Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our computing coverage is largely limited to software, and our data focus is mostly on unstructured, semi-structured, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

DataStax unveils Astra Streaming

DataStax announced Astra Streaming, a scalable, multi-cloud messaging and event streaming platform built on Apache Pulsar. Astra Streaming is integrated with DataStax’s marquee serverless database, Astra DB, to deliver a multi-cloud solution for managing both data in motion and data at rest. With the introduction of Astra Streaming, DataStax aims to deliver on its vision of an open data stack for today’s multi-cloud applications that require massive scale, zero-downtime availability, and high performance.

Astra Streaming features:

  • Global scale, cloud-native streaming, powered by Apache Pulsar without the complexity of self-managed solutions
  • Compatible with Apache Kafka and Java Message Service (JMS)
  • Multi-cloud
  • Simple developer APIs for streaming
  • Handles high-volume queuing and pub-sub messaging and more complex messaging patterns
  • Pay-as-you-go pricing
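The distinction between the queuing and pub-sub patterns mentioned above can be sketched with a toy in-memory broker. This is an illustration of the two delivery semantics only, not DataStax or Apache Pulsar code:

```python
from collections import defaultdict, deque

# Toy broker illustrating two messaging patterns: pub-sub (every
# subscriber receives each message) and shared queuing (each message
# is consumed exactly once by one competing consumer).

class Broker:
    def __init__(self):
        self.queues = defaultdict(deque)      # topic -> pending messages
        self.subscribers = defaultdict(list)  # topic -> fan-out callbacks

    def publish(self, topic, msg):
        # Pub-sub: deliver to every subscriber immediately.
        for callback in self.subscribers[topic]:
            callback(msg)
        # Queuing: also enqueue for competing consumers to poll.
        self.queues[topic].append(msg)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def poll(self, topic):
        # Queue semantics: each message is removed when consumed.
        return self.queues[topic].popleft() if self.queues[topic] else None

broker = Broker()
seen_a, seen_b = [], []
broker.subscribe("orders", seen_a.append)
broker.subscribe("orders", seen_b.append)
broker.publish("orders", "order-1")

assert seen_a == ["order-1"] and seen_b == ["order-1"]  # fan-out
assert broker.poll("orders") == "order-1"               # one consumer gets it
```

A production system like Pulsar adds persistence, partitioning, and acknowledgement on top of these same two patterns.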

Astra Streaming is available today in a beta version. To get started with Astra Streaming, create a free account.

https://www.datastax.com

DataRobot releases DataRobot 7.1

In its second major release of the year, DataRobot announced several product upgrades to its Augmented Intelligence platform designed to further democratize AI. The 7.1 release introduces:

  • MLOps Management Agents – DataRobot’s MLOps Management Agents provide advanced lifecycle management for an organization’s remote models. Management Agents understand the state of any remote model, regardless of how it was created or where it is running, and can automate various tasks.
  • Feature Discovery Push-Down Integration for Snowflake – Joint DataRobot and Snowflake customers can benefit from the automatic discovery and computation of new features for their models directly in the Snowflake Data Cloud.
  • Time Series Eureqa Model Enhancements – DataRobot Automated Time Series now runs its unique Eureqa forecasting models as part of the regular Autopilot process. Eureqa models are based on the idea that a genetic algorithm can fit different analytic expressions to trained data and return a mathematical formula as a machine learning model.
  • No-Code AI App Builder – The No-Code AI App Builder allows customers to quickly turn any deployed model into a rich AI application without writing a single line of code.

Additional product upgrades: Data Prep for Time Series, Nowcasting for Time-Aware Models, Automated AI Reports, and Prediction Jobs and Scheduling UI.
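The idea behind the Eureqa models described above, searching a space of analytic expressions and returning the best-fitting formula rather than an opaque model, can be illustrated with a toy sketch. This uses exhaustive search over a tiny hypothetical candidate space in place of a genetic algorithm, and is not DataRobot’s implementation:

```python
# Toy sketch of symbolic regression: try candidate analytic forms and
# integer coefficients, keep whichever minimizes error on the data,
# and return a human-readable formula. (Illustration only.)

data = [(x, 3 * x * x + 2) for x in range(-5, 6)]  # hidden truth: 3x^2 + 2

candidates = {
    "a*x + b":      lambda x, a, b: a * x + b,
    "a*x**2 + b":   lambda x, a, b: a * x ** 2 + b,
    "a*abs(x) + b": lambda x, a, b: a * abs(x) + b,
}

def mse(f, a, b):
    return sum((f(x, a, b) - y) ** 2 for x, y in data) / len(data)

best = None
for form, f in candidates.items():       # a GA would evolve this search
    for a in range(-5, 6):
        for b in range(-5, 6):
            err = mse(f, a, b)
            if best is None or err < best[0]:
                best = (err, form, a, b)

err, form, a, b = best
print(f"best formula: {form} with a={a}, b={b}, mse={err}")
```

The search recovers the exact generating formula (`a*x**2 + b` with a=3, b=2), which is the appeal of this model family: the output is an inspectable mathematical expression.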

https://www.datarobot.com

W3C updates candidate for decentralized identifiers

The Decentralized Identifier Working Group has published a second Candidate Recommendation Snapshot of Decentralized Identifiers (DIDs) v1.0.

This document defines Decentralized Identifiers (DIDs), a new type of identifier that enables verifiable, decentralized digital identity. A DID identifies any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) that the controller of the DID decides it identifies. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities. DIDs are URIs that associate a DID subject with a DID document, allowing trustable interactions associated with that subject. Each DID document can express cryptographic material, verification methods, or services, which provide a set of mechanisms enabling a DID controller to prove control of the DID.
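For illustration, a minimal DID and DID document can be modeled as plain data. The method name "example", key identifier, and key value below are hypothetical, adapted from the style of examples in the specification:

```python
# A DID is a URI with three colon-separated parts: the "did" scheme,
# a method name, and a method-specific identifier.
did = "did:example:123456789abcdefghi"

# A minimal DID document associating the subject with verification
# material. All values here are illustrative placeholders.
did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": did,
    "verificationMethod": [{
        "id": did + "#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": "z6Mk...placeholder",
    }],
    "authentication": [did + "#key-1"],
}

scheme, method, identifier = did.split(":", 2)
print(scheme, method, identifier)
```

Resolving a DID yields such a document; the listed verification methods are what let the controller prove control of the identifier.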

Candidate Recommendation means that the Working Group considers the technical design to be complete and is seeking implementation feedback on the document. The group is keen to get comments and implementation experience on this specification, filed as issues in the document’s GitHub repository. The group expects to satisfy the implementation goals (i.e., at least two independent implementations for each of the test cases) by July 17, 2021.

https://www.w3.org/TR/did-core/

Dataiku announces managed, online service

Dataiku, an Enterprise AI platform, announced the launch of Dataiku Online, which makes their AI and machine learning platform available as an online service for smaller, more agile organizations. Dataiku Online provides everything a company needs to build analytics projects, with data preparation, data visualization, AutoML, reporting, and dashboards all in one place. It has all of the same key capabilities as Dataiku’s flagship product, with a few configuration and design tweaks to match its managed usage, and without the IT resources required to manage an enterprise-scale deployment. The interface enables collaboration across a broad range of business and data team users. Teams can start within minutes, connect to their data sources, and deploy projects in days, at a price point that makes sense for their current stage of growth.

With Dataiku Online, businesses can take advantage of cloud data stack and data storage tools from Snowflake, Google BigQuery, and Amazon Redshift. A pre-integrated version of Dataiku Online is available through the Snowflake Marketplace.

Dataiku is also launching an offering specifically for startups. Seed-stage companies, and startups founded less than five years ago or with less than $10M in funding, can be eligible for discounted pricing specific to their current stage of growth.

https://www.dataiku.com/product/dataiku-as-a-managed-service

Amplitude unveils experimentation application for digital optimization

Amplitude, a Digital Optimization System, introduced Amplitude Experiment, an experimentation solution combining customer behavior and product analytics. Amplitude Experiment provides organizations an end-to-end experimentation and delivery workflow that integrates customer data into every step from generating a hypothesis to targeting users to measuring results. Organizations can run higher impact A/B tests and remotely configure experiences for key segments. 

Organizations can get stuck in low-value activities that don’t drive growth, like testing small tweaks to copy and color or using basic on/off toggling to manage new feature release risk. They can also waste resources and time on experiments that are doomed to fail, like starting from a weak hypothesis or being unable to reach the right segments. Built on the Behavioral Graph and Amplitude’s Digital Optimization System, Amplitude Experiment addresses these challenges by resolving the underlying issues of experiment design, targeting, identity resolution, and analysis. With Amplitude Experiment, organizations have a complete learning and growth loop, from insight to action to testing and delivery, in a single system.

  • Amplitude Analytics identifies problems, uncovers opportunities, and measures impact. 
  • Amplitude Recommend matches the right messages, content, and items to each individual user.
  • Amplitude Experiment tests bets and serves the best experience to customers. 
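The "measuring results" step of an A/B test is commonly a two-proportion z-test on conversion rates. This is a generic statistical sketch with made-up numbers, not Amplitude's API:

```python
from math import erf, sqrt

# Two-proportion z-test: did variant B convert significantly better
# than variant A? Counts below are illustrative.

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

lift, p_value = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"lift: {lift:.3%}, p-value: {p_value:.4f}")
```

With these sample counts (5.0% vs 6.5% conversion), the p-value falls below the conventional 0.05 threshold, so the lift would be judged statistically significant.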

https://amplitude.com/amplitude-experiment

Snowflake adds features

Snowflake unveiled new product innovations for the Data Cloud, including data programmability, global data governance, and platform optimizations.

Data programmability:

  • Snowpark. With initial support for Java and Scala, Snowflake’s developer experience, Snowpark, allows data engineers, data scientists, and developers to build using their preferred language and execute that code within Snowflake.
  • Java UDFs. With Java user-defined functions (UDFs), customers can bring their custom code and business logic to Snowflake.
  • Unstructured data. Snowflake’s unstructured data support enables customers to store, govern, process, and share file data alongside their structured and semi-structured data.
  • SQL API. The Snowflake SQL API enables applications to call Snowflake directly through a REST API.
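A SQL API call like the one described in the last bullet is an authenticated POST of a JSON body containing the statement. The sketch below only builds the request rather than sending it; the account name and token are placeholders, and auth details (OAuth or key-pair JWT) should be taken from Snowflake's SQL API documentation:

```python
import json

# Build (but do not send) a Snowflake SQL API statement request.
# Account locator and token are illustrative placeholders.

def build_statement_request(account: str, token: str, sql: str):
    url = f"https://{account}.snowflakecomputing.com/api/v2/statements"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    body = json.dumps({"statement": sql, "timeout": 60})
    return url, headers, body

url, headers, body = build_statement_request(
    "my_account", "<token>", "SELECT COUNT(*) FROM orders"
)
print(url)
```

An HTTP client such as `urllib.request` or `requests` would then POST `body` to `url` with these headers and poll for the result set.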

Global governance:

  • Classification. Snowflake’s classification capability automatically detects personally identifiable information (PII) in a given table and leverages the tagging framework to annotate the data.
  • Anonymized views. This can be used to protect privacy and identity in a dataset.

Platform:

  • Improved Storage Economics. Better compression, and reduced storage costs.
  • Improved Support for Interactive Experiences. Updates released for high volume and low latency workload requirements improve query throughput on a single compute cluster.
  • Usage Dashboard. New usage dashboard helps customers better understand usage and costs across the platform.

https://www.snowflake.com

GraphDB 9.8 brings text mining and Kafka connectivity

Ontotext announced the release of GraphDB 9.8, which offers text mining integration, notifications over Kafka, Helm charts, and performance improvements. The text mining plugin comes with out-of-the-box support for text analytic services such as Ontotext’s Tag API, GATE Cloud, and spaCy server, as well as an expressive mapping language to register new services without coding. The extracted text annotations can be manipulated with SPARQL and either returned to the caller for further processing or stored directly into the repository, where they will enrich the existing knowledge graph. This functionality covers a number of use cases that rely on both RDF and text analytics.

The Kafka connector provides a means to synchronize changes to the RDF model to any downstream system via the Apache Kafka framework. Each Kafka connector instance stays automatically up to date with the GraphDB repository data. The implementation is built on the same framework as the existing Elasticsearch, Solr, and Lucene connectors and allows for precise mapping from RDF to JSON, such as defining fields based on property chains, nested document support, and advanced filtering by type, literal language, or a complex expression. GraphDB 9.8 comes with standard Helm charts and instructions that can help you get started with GraphDB Enterprise Edition on Kubernetes.
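Storing extracted annotations "directly into the repository" amounts to a SPARQL update against GraphDB's RDF4J-style HTTP endpoint. The sketch below constructs such a request without sending it; the host, repository name, and `ex:` vocabulary are placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build (but do not send) a SPARQL INSERT that stores a hypothetical
# text-mining annotation in a GraphDB repository. Host, repo name,
# and vocabulary are illustrative.

GRAPHDB = "http://localhost:7200"
REPO = "news"

update = """
PREFIX ex: <http://example.com/>
INSERT DATA {
  ex:article42 ex:mentions ex:ApacheKafka .
}
"""

req = Request(
    f"{GRAPHDB}/repositories/{REPO}/statements",
    data=urlencode({"update": update}).encode(),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# urllib.request.urlopen(req) would execute the update against a
# running GraphDB instance; omitted here.
print(req.full_url)
```

Once inserted, the triple is queryable with SPARQL alongside the rest of the knowledge graph, which is the RDF-plus-text-analytics workflow the release notes describe.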

https://www.ontotext.com/company/news/graphdb-9-8/

Kofax updates Intelligent Automation Platform

Kofax, a supplier of Intelligent Automation software for digital workflow transformation, announced the latest release of its Intelligent Automation Platform. Kofax TotalAgility, the workflow orchestration engine within the company’s Intelligent Automation Platform, has been enhanced with 50 new low-code, document intelligence, process orchestration and connected systems capabilities.

  • Faster development of automation workflows. A new user experience extends low-code to more business users by empowering citizen developers and analysts to easily set up complex workflows. An enhanced business rules engine allows analysts to execute decision strategies by using visual condition rules and setting up custom services without needing to code.
  • Expanded low-code support for cognitive capture. Nearly 90 percent of data generated today is unstructured. Kofax’s cognitive capture functionality enables professional developers to build advanced AI and Capture models, allowing them to train the system and effectively incorporate document intelligence.
  • One-click document classification. Kofax TotalAgility enables citizen developers and business analysts to enhance advanced document classification models, empowering them to rapidly and visually train, identify and classify documents.
  • Additional low-code integration options. Simplified integrations include new support for OpenAPI and grouping of data, enabling data models to be defined manually or using simple JavaScript Object Notation (JSON). Enhanced support for industry-standard OAuth 2.0 provides greater authorization and authentication options.

https://www.kofax.com/products/totalagility/release-highlights


© 2025 The Gilbane Advisor
