Curated for content, computing, and digital experience professionals

Category: Computing & data (Page 38 of 80)

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Expert.ai announces general availability of hybrid natural language platform

Following an early access program launched in March, expert.ai announced the general availability of its platform for designing, developing, testing, deploying, and monitoring scalable natural language solutions. The expert.ai Platform uses an exclusive hybrid AI approach honed from hundreds of real-world implementations. Comprehensive and easy to use, it combines symbolic AI and machine learning techniques to ensure the best possible accuracy for each individual use case, with the transparency of explainable AI. Easy to deploy and operate, the cloud-based expert.ai Platform helps organizations accelerate, augment, and expand expertise for any job or process that involves language. By turning any text-based document into structured data, the platform supports knowledge discovery, process automation, and decision making, with the flexibility to design language models for any use case.
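
As a rough illustration of the hybrid idea (and not expert.ai's actual platform or API), a pipeline can try explainable symbolic rules first and fall back to a statistical scorer; every rule, label, and vocabulary below is hypothetical:

```python
import re

# Symbolic layer: hand-written, fully explainable patterns (hypothetical examples)
RULES = {
    "invoice": re.compile(r"\b(invoice|amount due|payment)\b", re.I),
    "complaint": re.compile(r"\b(refund|broken|disappointed)\b", re.I),
}

# Stand-in for a trained statistical model: hypothetical keyword vocabularies
ML_VOCAB = {
    "invoice": {"bill", "total", "charge"},
    "complaint": {"unhappy", "return", "issue"},
}

def keyword_score(text, vocab):
    # Crude proxy for an ML classifier's score: vocabulary overlap
    words = set(re.findall(r"\w+", text.lower()))
    return len(words & vocab) / len(vocab)

def classify(text):
    # Symbolic pass first: a rule match is transparent and auditable
    for label, pattern in RULES.items():
        if pattern.search(text):
            return label, "rule"
    # Machine-learning fallback for texts no rule covers
    scores = {label: keyword_score(text, vocab) for label, vocab in ML_VOCAB.items()}
    return max(scores, key=scores.get), "model"
```

Because the symbolic pass runs first, every rule-based decision can be traced to a specific pattern, which is one way a hybrid design earns the "explainable" label.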

https://expert.ai

Retresco elevates natural language generation

With the release of the new text variants suggestions feature, Berlin-based AI company Retresco, an expert in Natural Language Generation, provides a software function that uses artificial intelligence to automatically produce phrasing suggestions in the form of complete sentences. This function automates the creative writing process, shortens it, and increases text variance without user intervention, augmenting human creativity with AI.

Retresco’s NLG software (textengine.io) independently suggests high-quality texts within milliseconds, based on a sample sentence entered by the user and without complicated setup. The automatically generated texts can be adopted either partially or completely and revised at any time. Human and machine thus work hand in hand: the software generates data-based text suggestions, while the human user ultimately decides how to use them. The result: significantly greater text variance, more efficient text-creation processes, a better user experience, and support for the most difficult aspect of writing: creativity. This feature is particularly relevant where large volumes of versatile, high-quality text are required at frequent intervals, often under time pressure. In e-commerce, for instance, online stores need numerous product descriptions that, above all, have to be highly varied and SEO-optimized.

https://www.retresco.com/augmenting-creativity/

Contentsquare launches cookieless experience analytics solution

To help brands build better digital experiences and establish greater digital trust with customers, Contentsquare, a provider of digital experience analytics, announced an analytics solution that allows teams to access critical revenue insights without any use of cookies. This announcement comes on the heels of a recent $500M Series E round led by SoftBank. Contentsquare has never relied on third-party cookies, and instead aggregates and analyzes trillions of consumer interactions that demonstrate intent, such as mouse movements, touch and mobile interactions, to help brands deliver the best possible experiences to their customers. The solution will now give businesses the option to turn off both first- and third-party cookies. This latest innovation extends Contentsquare’s privacy-first approach.

https://contentsquare.com

DataStax unveils Astra Streaming

DataStax announced Astra Streaming, a scalable, multi-cloud messaging and event streaming platform built on Apache Pulsar. Astra Streaming is integrated with DataStax’s marquee serverless database, Astra DB, to deliver a multi-cloud solution for managing both data in motion and data at rest. With the introduction of Astra Streaming, DataStax aims to deliver on its vision of an open data stack for today’s multi-cloud applications that require massive scale, zero-downtime availability, and high performance. Astra Streaming Features:

  • Global scale, cloud-native streaming, powered by Apache Pulsar without the complexity of self-managed solutions
  • Compatible with Apache Kafka and the Java Message Service (JMS)
  • Multi-cloud
  • Simple developer APIs for streaming
  • Handles high-volume queuing and pub-sub messaging and more complex messaging patterns
  • Pay-as-you-go pricing

Astra Streaming is available today in a beta version. To get started with Astra Streaming, create a free account.
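
The queuing vs. pub-sub distinction in the feature list can be sketched with a toy in-memory broker (a conceptual illustration only, not the Apache Pulsar client API): consumers sharing one named subscription compete for messages, while separate subscriptions each receive the full stream.

```python
from collections import defaultdict, deque

class Broker:
    def __init__(self):
        self.subscriptions = defaultdict(dict)  # topic -> {subscription name: queue}

    def subscribe(self, topic, sub_name):
        self.subscriptions[topic].setdefault(sub_name, deque())

    def publish(self, topic, message):
        # Fan out: every named subscription on the topic gets its own copy
        for queue in self.subscriptions[topic].values():
            queue.append(message)

    def receive(self, topic, sub_name):
        # Consumers sharing one subscription drain the same queue (queuing);
        # consumers on separate subscriptions each see the full stream (pub-sub)
        queue = self.subscriptions[topic][sub_name]
        return queue.popleft() if queue else None

broker = Broker()
broker.subscribe("orders", "billing")    # shared by competing billing workers
broker.subscribe("orders", "analytics")  # independent subscription
broker.publish("orders", "order-1")
```

Once "order-1" is consumed on the billing subscription it is gone for other billing workers, but the analytics subscription still holds its own copy.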

https://www.datastax.com

DataRobot releases DataRobot 7.1

In its second major release of the year, DataRobot announced several product upgrades to its Augmented Intelligence platform designed to further democratize AI. The 7.1 release introduces:

  • MLOps Management Agents – DataRobot’s MLOps Management Agents provide advanced lifecycle management for an organization’s remote models. Management Agents understand the state of any remote model, regardless of how it was created or where it is running, and can automate various tasks.
  • Feature Discovery Push-Down Integration for Snowflake – Joint DataRobot and Snowflake customers can benefit from the automatic discovery and computation of new features for their models directly in the Snowflake Data Cloud.
  • Time Series Eureqa Model Enhancements – DataRobot Automated Time Series now runs its unique Eureqa forecasting models as part of the regular Autopilot process. Eureqa models are based on the idea that a genetic algorithm can fit different analytic expressions to trained data and return a mathematical formula as a machine learning model.
  • No-Code AI App Builder – the No-Code AI App Builder allows customers to quickly turn any deployed model into a rich AI application without a single line of code.

Additional product upgrades: Data Prep for Time Series, Nowcasting for Time-Aware Models, Automated AI Reports, and Prediction Jobs and Scheduling UI.
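
The genetic-algorithm idea behind Eureqa-style models can be sketched in a few lines of toy symbolic regression (an illustration of the concept, not DataRobot's implementation): random expression trees are scored against sample points of y = x² + 1, and the best survive each generation.

```python
import random

random.seed(0)  # reproducible toy run

# Expression trees over x, small constants, and three operators
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=0):
    # Grow a small random expression tree
    if depth > 2 or random.random() < 0.3:
        return random.choice(["x", round(random.uniform(-2, 2), 2)])
    return (random.choice(list(OPS)), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(expr, data):
    # Fitness: sum of squared residuals over the sample points
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

def evolve(data, pop_size=60, generations=40):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda e: error(e, data))
        survivors = pop[: pop_size // 3]  # elitism keeps the best third
        # Crude variation step: fresh random trees stand in for real
        # mutation/crossover, which a production system would use instead
        pop = survivors + [random_expr() for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda e: error(e, data))

data = [(x, x * x + 1) for x in range(-3, 4)]  # target formula: y = x^2 + 1
best = evolve(data)
```

The winning expression tree is itself the model, which is why this family of techniques can return a readable mathematical formula rather than an opaque predictor.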

https://www.datarobot.com

W3C updates candidate for decentralized identifiers

The Decentralized Identifier Working Group has just published a second Candidate Recommendation Snapshot of Decentralized Identifiers (DIDs) v1.0.

This document defines Decentralized Identifiers (DIDs), a new type of identifier that enables verifiable, decentralized digital identity. A DID identifies any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) that the controller of the DID decides that it identifies. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities. DIDs are URIs that associate a DID subject with a DID document, allowing trustable interactions associated with that subject. Each DID document can express cryptographic material, verification methods, or services, which provide a set of mechanisms enabling a DID controller to prove control of the DID.
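
For example, a minimal DID document (adapted from the style of examples in the specification; the identifier and key material are illustrative) associates a DID with a verification method its controller can use to prove control:

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:example:123456789abcdefghi",
  "verificationMethod": [{
    "id": "did:example:123456789abcdefghi#keys-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:example:123456789abcdefghi",
    "publicKeyMultibase": "zH3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
  }],
  "authentication": ["did:example:123456789abcdefghi#keys-1"]
}
```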

Candidate Recommendation means that the Working Group considers the technical design to be complete, and is seeking implementation feedback on the document. The group is keen to receive comments and implementation experience on this specification via issues raised in the document’s GitHub repository. The group expects to satisfy the implementation goals (i.e., at least two independent implementations for each of the test cases) by July 17, 2021.

https://www.w3.org/TR/did-core/

Dataiku announces managed, online service

Dataiku, an Enterprise AI platform, announced the launch of Dataiku Online, which makes their AI and machine learning platform available as an online service for smaller, more agile organizations. Dataiku Online provides everything a company needs to build analytics projects, with data preparation, data visualization, AutoML, reporting, and dashboards all in one place. It has all of the key capabilities of Dataiku’s flagship product, with a few configuration and design tweaks to match its managed usage, and without the IT resources required to manage an enterprise-scale deployment. The interface enables collaboration across a broad range of business and data team users, and teams can start within minutes, connect to their data sources, and start deploying projects in days, at a price point that makes sense for their current stage of growth.

With Dataiku Online, businesses can take advantage of cloud data stack and data storage tools from Snowflake, Google BigQuery, and Amazon Redshift. There is a pre-integrated version of Dataiku Online available through the Snowflake Marketplace.

Dataiku is also launching an offering specifically for startups. Seed-stage companies, and startups founded less than five years ago or with less than $10M in funding, may be eligible for discounted pricing specific to their current stage of growth.

https://www.dataiku.com/product/dataiku-as-a-managed-service

Amplitude unveils experimentation application for digital optimization

Amplitude, a Digital Optimization System, introduced Amplitude Experiment, an experimentation solution combining customer behavior and product analytics. Amplitude Experiment provides organizations an end-to-end experimentation and delivery workflow that integrates customer data into every step from generating a hypothesis to targeting users to measuring results. Organizations can run higher impact A/B tests and remotely configure experiences for key segments. 

Organizations can get stuck in low-value activities that don’t drive growth, like testing small tweaks to copy and color changes or using basic on/off toggling to manage new feature release risk, or they waste resources and time on experiments that are doomed to fail, like starting from a weak hypothesis or not being able to reach the right segments. Built on the Behavioral Graph and Amplitude’s Digital Optimization System, Amplitude Experiment addresses these challenges by resolving the underlying issues of experiment design, targeting, identity resolution, and analysis. With Amplitude Experiment, organizations have a complete learning and growth loop from insight to action to testing and delivery in a single system.

  • Amplitude Analytics identifies problems, uncovers opportunities, and measures impact. 
  • Amplitude Recommend matches the right messages, content, and items to each individual user.
  • Amplitude Experiment tests bets and serves the best experience to customers. 
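
The statistics behind a basic A/B test of the kind described above can be sketched with a generic two-proportion z-test (not Amplitude's API; the sample numbers are made up):

```python
from math import erf, sqrt

def ab_test(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: is variant B's conversion rate really higher than A's?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return p_b - p_a, z, p_value

# Hypothetical sample: 120/2400 control conversions vs. 156/2400 for the variant
lift, z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

Here the 1.5-point lift is significant at the 5% level (z ≈ 2.23, p ≈ 0.026); an experimentation platform layers hypothesis management, targeting, and delivery on top of this kind of calculation.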

https://amplitude.com/amplitude-experiment


© 2024 The Gilbane Advisor
