Curated for content, computing, data, information, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Elastic and Confluent to enhance Kafka and Elasticsearch experience

Elastic announced an expanded strategic partnership with Confluent, Inc. to deliver the best integrated product experience to the Apache Kafka and Elasticsearch community. Through this alliance, Elastic and Confluent will enhance existing product integrations and jointly develop new capabilities to help users easily combine the benefits of the Elastic Stack and Kafka. Elastic and Confluent plan further enhancements to the product experience for users by:

  • Strengthening the native integration between Elastic Cloud and Confluent Cloud
  • Enriching the Elasticsearch Service Sink Connector in Confluent
  • Developing packaged joint solutions for specific use cases
  • Introducing easier ways to output data from Kafka in an Elastic Common Schema

Elastic has long provided native support for Kafka to help centralized logging and monitoring customers monitor the health and performance of their Kafka pipelines in Elasticsearch. In addition, users have the choice of a jointly built and fully managed Elasticsearch Service Sink Connector in Confluent Cloud that eliminates the need for customers to take on the difficult task of managing their own Kafka clusters. This gives organizations the ability to seamlessly stream data moving through Kafka into Elasticsearch on all major cloud providers, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
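For self-managed deployments, this kind of Kafka-to-Elasticsearch streaming is typically wired up through Kafka Connect. Below is a minimal sketch of registering Confluent's Elasticsearch sink connector via the Connect REST API; the connector class and property names follow Confluent's documented connector, while the topic name, hosts, and settings are illustrative only:

```python
import json
import urllib.request

# Illustrative connector definition: stream the "app-logs" topic into
# Elasticsearch. The connector class and property names are real Confluent
# Elasticsearch Sink Connector settings; the topic and URLs are made up.
connector = {
    "name": "es-sink-app-logs",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "app-logs",
        "connection.url": "http://localhost:9200",
        "key.ignore": "true",     # derive document IDs from topic/partition/offset
        "schema.ignore": "true",  # index plain JSON without a registered schema
    },
}

def register(connect_url="http://localhost:8083/connectors"):
    """POST the connector definition to a Kafka Connect worker."""
    req = urllib.request.Request(
        connect_url,
        data=json.dumps(connector).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # the worker's HTTP response
```

In Confluent Cloud the same configuration is handled by the fully managed connector, so no Connect worker needs to be operated at all.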

https://www.confluent.io/blog/confluent-and-elastic-partner-to-deliver-real-time-analytics-monitoring-optimized-search/

MarkLogic adds AWS Glue Connector

MarkLogic Corporation announced the general availability of a custom connector for AWS Glue, a managed, serverless data integration service to create, run, and monitor data integration pipelines. The MarkLogic Connector for Glue further integrates MarkLogic with the AWS cloud ecosystem and makes it easy for developers to quickly run extract, transform, and load (ETL) jobs using familiar tools. The new connector is easily accessible in the AWS marketplace and can be used within Glue Studio, a visual interface that embraces a low code/no code approach to data integration.

The MarkLogic Connector for AWS Glue can be used for both the ingestion and consumption of data into and out of a MarkLogic Data Hub. Users can load data from and export data to various AWS services like Amazon S3, Amazon Redshift, and third-party data stores like Oracle and Snowflake. Data flows can be in bulk or streaming and the connector is designed for complex operational and analytical workloads.

https://www.marklogic.com/blog/marklogic-connector-for-aws-glue-now-available-on-aws-marketplace/

Cortical.io announced Message Intelligence 2.1

Cortical.io announced Message Intelligence 2.1, an intelligent document processing (IDP) solution that provides high accuracy in filtering, classification, and extraction of emails, attachments, and other types of unstructured documents. Leveraging Cortical.io’s method for natural language understanding (NLU), Message Intelligence 2.1 enables higher productivity, fewer false positives, and less manual intervention. It also requires far less material to train custom classifiers and extraction models, speeding up time to production and value. This is especially valuable in situations where training material is scarce.

A key capability of Message Intelligence 2.1 is that it allows a subject matter expert to easily create document-processing pipelines with components including inputs, filters, classifiers, extractions, and actions. The product comes with tools for building classifiers and extraction models, so that subject matter experts do not need the intervention of AI experts or data scientists to adapt the system to the specific classification and extraction needs of their organization. Cortical.io’s Message Intelligence solution is especially useful in situations where large quantities of messages and documents come in daily through emails, website submissions, or social media. Pricing is based on the volume of emails and/or documents processed.
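As a toy illustration of the pipeline idea described above (not Cortical.io’s actual API; every name here is made up), a chain of inputs, filters, classifiers, and actions can be modeled as composed functions over a list of items:

```python
# Hypothetical sketch of a document-processing pipeline. Each stage is a
# plain function over a list of (text, metadata) items, and a Pipeline
# applies stages in order: inputs -> filters -> classifiers -> actions.

class Pipeline:
    """Chain of processing stages applied in sequence."""

    def __init__(self, *stages):
        self.stages = stages

    def run(self, items):
        for stage in self.stages:
            items = stage(items)
        return items

def input_stage(raw_messages):
    """Normalize incoming messages into (text, metadata) pairs."""
    return [(m.strip(), {}) for m in raw_messages]

def spam_filter(items):
    """Drop items matching a simple keyword filter."""
    return [(t, meta) for t, meta in items if "unsubscribe" not in t.lower()]

def classify(items):
    """Attach a coarse label; a trained classifier model would go here."""
    for text, meta in items:
        meta["label"] = "claim" if "claim" in text.lower() else "other"
    return items

pipeline = Pipeline(input_stage, spam_filter, classify)
result = pipeline.run(["New CLAIM for policy 123", "Click to unsubscribe now"])
# result: [("New CLAIM for policy 123", {"label": "claim"})]
```

The point of the design is that each stage is independently replaceable, which is what lets a subject matter expert swap in a different filter or classifier without touching the rest of the pipeline.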

https://www.cortical.io

Trifacta expands data connectivity to 180+ sources

Trifacta announced it is expanding its platform’s data integration capabilities by providing universal data connectivity to more than 180 data sources. These pre-built connectors make it faster and easier for more users in organizations of any size to connect to more data. To build curated, accessible data products for advanced data insights and analytics, data engineers and analysts need flexible, seamless access to data, regardless of its source. The Trifacta platform already offers connectivity to a wide range of data sources. Universal connectivity expands the range of use cases possible with the Trifacta platform, including but not limited to:

  • Collaboration and Support: Smartsheet, Airtable, Confluence, Microsoft SharePoint, and Jira.
  • Resource Planning and Visibility: SAP ERP, SAP HANA, and Microsoft Dynamics.
  • Finance and Accounting: Workday, NetSuite, Xero, ADP, QuickBooks, and Sage.
  • Marketing, Sales, e-Commerce: Salesforce, Google Analytics, Facebook Ads, Twitter Ads, LinkedIn Ads, Amazon Marketplace, and Shopify.
  • Cloud Data Warehouses and Databases: BigQuery, Snowflake, Redshift, Oracle, SQL Server, PostgreSQL, MySQL, MongoDB, Teradata, and Hive.
  • Files & File Systems: S3, GCS, ADLS, HDFS, SFTP, JSON, XML, Excel, and Google Sheets. Trifacta is also enhancing support for semi-structured data, like JSON and XML.
  • Cloud Data Exchanges: AWS Data Exchange, Snowflake Data Marketplace, and Google Public Datasets.

https://www.trifacta.com/integrations/

DataStax eases migrations from Apache Cassandra to DataStax Astra

DataStax announced the general availability of a new Zero-Downtime Cloud Migration tool that enables organizations to seamlessly migrate live data from self-managed Apache Cassandra instances to the company’s fully managed serverless Cassandra offering, DataStax Astra, with no downtime. The Apache Cassandra open source database is often used for workloads that must deliver massive amounts of data to users around the world with high reliability. As such, many Cassandra production applications are business critical and always on, and downtime is not an option. With DataStax’s new migration tool, enterprises can easily migrate live production Cassandra or DataStax Enterprise workloads to the DataStax Astra database-as-a-service (DBaaS) to quickly take advantage of the cost savings and other benefits of fully managed, serverless Cassandra. The DataStax Zero-Downtime Migration tool is available at no cost and comes with every DataStax Astra subscription. For more information on the fastest way to get up and running on Astra without any downtime, see

https://www.datastax.com/blog/four-steps-migrate-live-data-apache-cassandra-astra-zero-downtime

ThoughtSpot acquires SeekWell

ThoughtSpot, provider of search & AI-driven analytics, announced it has entered into a definitive agreement to acquire SeekWell. With SeekWell, customers will be able to operationalize their analytics and use SQL to push cloud data insights directly to business applications. As the companies integrate their offerings, the combination of ThoughtSpot and SeekWell will let users use natural language search to pull data from cloud data warehouses, modify it with productivity applications like Google Sheets, then automatically sync it back to business applications like Salesforce. With SeekWell and ThoughtSpot, customers can find insights more easily and close data loops by pushing insights directly back to applications, scaling data-driven decision making in the process.

SeekWell capabilities are available from ThoughtSpot starting today. As SeekWell becomes fully integrated into ThoughtSpot, this entire process will be powered by natural language search. No SQL will be required; instead, customers can use search to find data in the cloud, enable modification via productivity apps, and sync it with business apps. ThoughtSpot will also invest in building new business app integrations, expanding the number of end destinations for SeekWell.
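The pattern being described, running SQL against a warehouse and pushing the resulting rows into a business application, can be sketched as follows. This is a toy stand-in: sqlite3 substitutes for a cloud warehouse, and `push_to_crm` is a hypothetical endpoint, not SeekWell’s or ThoughtSpot’s actual API:

```python
import sqlite3

# Illustrative "reverse ETL" loop: query a warehouse with SQL, then push
# the result rows to a business application. All table names, data, and
# the push_to_crm function are made up for demonstration.

def query_warehouse(sql):
    """Run a SQL query against an in-memory table of demo accounts."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT, arr REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("Acme", 120.0), ("Globex", 45.0)])
    return conn.execute(sql).fetchall()

def push_to_crm(rows):
    """A real integration would POST these records to the application's API."""
    return [{"account": name, "annual_revenue": arr} for name, arr in rows]

insights = query_warehouse("SELECT name, arr FROM accounts WHERE arr > 100")
payload = push_to_crm(insights)
# payload: [{"account": "Acme", "annual_revenue": 120.0}]
```

The natural-language layer ThoughtSpot describes would replace the hand-written SQL string with a generated query, leaving the push step unchanged.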

https://www.thoughtspot.com ▪︎ https://seekwell.io

SciBite launches AI-driven semantic search platform

SciBite, an Elsevier semantic technology company, announced the launch of SciBiteSearch, a scientific search and analytics platform that offers interrogation and analysis capabilities across unstructured and structured data, from public and proprietary sources. SciBiteSearch provides scientists with domain-specific, ontology- and AI-powered search capabilities.

SciBiteSearch uses knowledge graphs to augment searches and deliver not only items relevant to the query but the structure and relationship between them. The addition of AI enables natural language understanding. SciBiteSearch can integrate data across a range of use cases including:

  • Unify multiple data sources into a single solution, designed for departments wanting their own tailored search tool. For example, combining public biomedical literature, clinical trials, and grants with proprietary data.
  • Incorporate full-text biomedical literature from publishers to better address researchers’ discovery needs. For example, users can load subscribed licensed data from partner publishers or content brokers.
  • Enable users to get accurate search results without the need to understand the complexities of Named Entity Recognition (NER), its underlying data structures, or the functions required to surface results.

SciBiteSearch builds sophisticated query and assertion indices using SciBite’s tools and ontologies. A streaming load API, plus connectors and parsers for different sources and content types, lets it load and process content to make it searchable.

http://scibite.com/scibitesearch

Arthur releases NLP model monitoring solution

Arthur, the machine learning model monitoring company, released a suite of new tools and features for monitoring natural language processing models. Natural language processing is one of the most widely adopted machine learning technologies in the enterprise. But organizations often struggle to find the right tools to monitor these models.

The Arthur platform now offers advanced performance monitoring for NLP models, including tracking data drift, bias detection, and prediction-level model explainability. Monitoring NLP models for data drift involves comparing the statistical similarity of new input documents to the documents used to train the model. The Arthur platform automatically alerts users when input documents or output text start drifting beyond pre-configured thresholds.
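One common way to quantify that statistical similarity is to compare token frequency distributions with a divergence measure and alert past a threshold. Arthur has not published its exact statistics, so the sketch below uses Jensen-Shannon divergence purely as an illustration of the idea:

```python
import math
from collections import Counter

# Illustrative drift check: compare the unigram distribution of new input
# documents against the training corpus. Jensen-Shannon divergence is one
# common choice; this is not Arthur's actual method.

def token_distribution(docs):
    """Unigram frequency distribution over a corpus of documents."""
    counts = Counter(tok for doc in docs for tok in doc.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so bounded by 1.0)."""
    vocab = set(p) | set(q)
    m = {t: (p.get(t, 0) + q.get(t, 0)) / 2 for t in vocab}

    def kl(a):
        return sum(a[t] * math.log2(a[t] / m[t])
                   for t in vocab if a.get(t, 0) > 0)

    return (kl(p) + kl(q)) / 2

def drifted(train_docs, new_docs, threshold=0.2):
    """Alert when new inputs diverge from the training corpus."""
    return js_divergence(token_distribution(train_docs),
                         token_distribution(new_docs)) > threshold
```

Production systems typically work on embeddings or richer features rather than raw unigrams, but the alerting logic, a distance between "training" and "live" distributions checked against a pre-configured threshold, is the same.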

Arthur now also offers bias detection capabilities for NLP models, allowing data science teams to uncover differences in accuracy and other performance measures across different subgroups to identify and fix unfair model bias. The platform also offers performance-bias analysis for tabular models. The Arthur team has also released a new set of explainability tools for NLP models, providing token-level insights for language models. Organizations can now understand which specific words within a document contributed most to a given prediction, even for black-box models.

https://www.arthur.ai


© 2025 The Gilbane Advisor
