Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Anthropic announces Claude for Enterprise

Anthropic announced the Claude Enterprise plan to help organizations securely collaborate with Claude using internal knowledge. The Claude Enterprise plan offers an expanded 500K context window, more usage capacity, and a native GitHub integration so you can work on entire codebases with Claude. It also includes enterprise-grade security features—like SSO, role-based permissions, and admin tooling—that help protect your data and team.

With Claude, your organization’s knowledge is easier to share and reuse, enabling every individual on the team to quickly and consistently produce their best work. At the same time, your data is protected: Anthropic does not train Claude on your conversations and content. By integrating Claude with your organization’s knowledge, you can scale expertise across more projects, decisions, and teams.

When you combine expanded context windows with Projects and Artifacts, Claude becomes an end-to-end solution to help your team take any initiative from idea to high-quality work output. For example, marketers can turn market trends into a compelling campaign. Product managers can upload product specifications for Claude to build an interactive prototype. Engineers can connect codebases for help on troubleshooting errors and identifying optimizations.
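
For developers outside the claude.ai Enterprise interface, the same long-context pattern can be sketched programmatically. The example below is illustrative only and is not from the announcement: it uses Anthropic’s public Messages API (Python SDK) to ground a question in a long internal document; the model name, file name, and token limit shown are assumptions.

  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  with open("internal_handbook.txt") as f:  # hypothetical internal document
      handbook = f.read()

  message = client.messages.create(
      model="claude-3-5-sonnet-20240620",  # assumed model name for illustration
      max_tokens=1024,
      messages=[{
          "role": "user",
          "content": (
              f"<document>\n{handbook}\n</document>\n\n"
              "Using only the document above, summarize our onboarding policy."
          ),
      }],
  )
  print(message.content[0].text)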

https://www.anthropic.com/news/claude-for-enterprise

Couchbase expands cloud database platform with Capella Columnar and vector search

Couchbase, Inc. launched Capella Columnar on AWS to help organizations streamline the development of adaptive applications by enabling real-time data analysis alongside operational workloads within a single database platform. Also generally available today are Couchbase Mobile with vector search, which makes it possible for customers to offer similarity and hybrid search in their applications on mobile and at the edge, and Capella Free Tier, a free developer environment.

Capella Columnar addresses the challenge of parsing, transforming, and persisting JSON data into an analysis-ready columnar format. It supports real-time, multisource ingestion from Couchbase and from systems like Confluent Cloud, which can draw data from third-party JSON or SQL systems. Capella Columnar makes analysis easier through Capella iQ, an AI coding assistant that writes SQL++ so developers don’t need to wait for a BI team to run analytics for them. Once an important metric is calculated, it can be written back to the operational side of Capella, where the application can use it.
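
As a rough sketch of that loop (not code from Couchbase), the snippet below uses the Couchbase Python SDK to run an SQL++ aggregation over JSON data and write the resulting metric back to an operational collection. The endpoint, credentials, dataset, bucket, and field names are assumptions, and whether Columnar is reached through this exact analytics call may differ by deployment.

  from couchbase.auth import PasswordAuthenticator
  from couchbase.cluster import Cluster
  from couchbase.options import ClusterOptions

  # Hypothetical Capella endpoint and credentials.
  cluster = Cluster(
      "couchbases://cb.example.cloud",
      ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
  )

  # SQL++ aggregation over JSON order documents (dataset and fields are assumed).
  rows = cluster.analytics_query(
      "SELECT AVG(o.total) AS avg_order_value FROM orders o WHERE o.region = 'EU'"
  )
  avg_order_value = next(iter(rows))["avg_order_value"]

  # Write the computed metric back to the operational side for the application to use.
  metrics = cluster.bucket("app").scope("analytics").collection("metrics")
  metrics.upsert("avg_order_value_eu", {"value": avg_order_value})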

With vector search on-device in Couchbase Lite, the embedded database for mobile and IoT applications, mobile developers can now build semantic search and retrieval-augmented generation (RAG) applications at the edge.
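
The core retrieval step behind such on-device RAG applications can be sketched in a library-agnostic way (this is not Couchbase Lite’s actual API): nearest-neighbor search over locally stored embeddings, with the top matches passed to the language model as context.

  import numpy as np

  def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
      """Indices of the k chunks most similar to the query, by cosine similarity."""
      q = query_vec / np.linalg.norm(query_vec)
      d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
      return np.argsort(d @ q)[::-1][:k]

  # Hypothetical local store: one embedding per cached document chunk.
  doc_vecs = np.random.rand(1000, 384).astype(np.float32)
  query_vec = np.random.rand(384).astype(np.float32)

  context_ids = top_k(query_vec, doc_vecs)  # these chunks would go into the LLM prompt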

https://www.couchbase.com/blog/free-tier-capella-columnar-mobile-vector-search-and-more/

Elastic returns to open source license for Elasticsearch and Kibana

Elastic, a Search AI company, announced that it is adding the GNU Affero General Public License v3 (AGPL) as an option for licensing the free part of the Elasticsearch and Kibana source code, alongside the existing Server Side Public License 1.0 (SSPL 1.0) and Elastic License 2.0 (ELv2).

With the addition of AGPL, an open source license approved by the Open Source Initiative (OSI), Elasticsearch and Kibana will again be officially considered open source, enabling Elastic’s customers and community to use, modify, redistribute, and collaborate on Elastic’s source code under a well-known open source license.

Adding AGPL will also enable greater engagement and adoption among Elastic’s users in areas including vector search, further increasing the popularity of Elasticsearch as a runtime platform for RAG and GenAI applications.

The addition of AGPL as a license option does not affect existing users working with either SSPL or ELv2, and there will be no change to Elastic’s binary distributions. Similarly, for users building applications or using plugins on Elasticsearch or Kibana, nothing changes — Elastic’s client libraries will continue to be licensed under Apache 2.0.

https://www.elastic.co/blog/elasticsearch-is-open-source-again

Microsoft introduces Bing generative search

From The Microsoft Bing Blog…

… Today, we’re excited to share an early view of our new generative search experience which is currently shipping to a small percentage of user queries …

This new experience combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs). It understands the search query, reviews millions of sources of information, dynamically matches content, and generates search results in a new AI-generated layout to fulfill the intent of the user’s query more effectively.

We’ve refined our methods to optimize accuracy in Bing, applying those insights as we continue to evolve our use of LLMs in search. We are continuing to look closely at how generative search impacts traffic to publishers. Early data indicates that this experience maintains the number of clicks to websites and supports a healthy web ecosystem. The generative search experience is designed with this in mind, including retaining traditional search results and increasing the number of clickable links, like the references in the results. 

We are slowly rolling this out and will take our time, garner feedback, test and learn, and work to create a great experience before making this more broadly available.

https://blogs.bing.com/search/July-2024/generativesearch

DeepL launches LLM focused on translation quality and performance

DeepL, a global Language AI company, announced its next-generation language model, powered by highly specialized LLM technology built specifically for translation and editing and designed to outperform competing models. The update enhances translation quality and performance in DeepL’s Language AI platform for businesses:

  • A specialized LLM: the DeepL solution uses an LLM uniquely tuned for language, producing more human-like translations and writing for a variety of use cases, with a reduced risk of hallucinations and misinformation.
  • Proprietary data: unlike general-purpose models that train on the public internet, the DeepL model draws on over seven years of proprietary data gathered specifically for content creation and translation.
  • Human model tutoring: with a focus on quality, DeepL relies on thousands of hand-picked language experts trained to “tutor” the model toward best-in-class translation.

Translations using the next-gen model are available to DeepL Pro customers in English, Japanese, German, and Simplified Chinese, with additional languages coming soon. The LLM can be activated within the web translator by selecting “next-gen model”. DeepL Pro users are protected by enterprise-grade security and compliance standards (ISO 27001 certification, GDPR and SOC 2 Type 2 compliance), and no Pro translations are ever used to train DeepL’s models.
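
For programmatic use, DeepL Pro translations are also available through DeepL’s API; the sketch below uses the official Python client and is illustrative only. The announcement describes activating the next-gen model in the web translator, so no model switch is shown here, and the API key is a placeholder.

  import deepl

  translator = deepl.Translator("YOUR_DEEPL_PRO_API_KEY")  # placeholder key

  result = translator.translate_text(
      "The new model produces more natural-sounding translations.",
      source_lang="EN",
      target_lang="JA",
  )
  print(result.text)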

https://www.deepl.com/en/press-release#2GYyHCVU8bjbu1iewCwLLx

TransPerfect introduces GlobalLink Live

TransPerfect, a provider of language and AI solutions for global business, announced the launch of GlobalLink Live, an AI-powered interpretation and accessibility platform that enables users to choose a mix of traditional interpretation and AI voice-to-voice translation to engage with live speech.

With GlobalLink Live’s multilingual capabilities, audiences can consume live speech, whether as part of a large event, a small group meeting, or a one-on-one encounter, in their preferred language regardless of the source language. Created as a turnkey solution for better language access at in-person, hybrid, and remote events, GlobalLink Live offers options ranging from traditional human interpretation to AI voice technology. Independent of location, subject area, budget, or local interpreter availability, event organizers can choose the approach that maximizes engagement with their audiences.

Meetings and gatherings with attendees from differing language backgrounds, as well as service industry interactions between agents and customers who speak different languages, can all be facilitated through GlobalLink Live. The self-service platform can be used for on-site and remote speakers, and can be accessed by audiences attending in person or viewing remotely. Program owners can choose to deliver content via simultaneous human interpretation, live subtitles, or using AI-powered voice-to-voice dubbing technology.

https://globallink.transperfect.com

Elastic introduces Playground to accelerate RAG development with Elasticsearch

Elastic announced Playground, a low-code interface that enables developers to build RAG applications using Elasticsearch in minutes.

When prototyping conversational search, the ability to rapidly iterate on and experiment with key components of a RAG workflow (for example, hybrid search or adding reranking) is important for getting accurate, hallucination-free responses from LLMs.

The Elasticsearch vector database and the Search AI platform provide developers with a wide range of capabilities, such as comprehensive hybrid search and access to innovation from a growing list of LLM providers. Playground lets you use the power of those features without added complexity.

Playground’s intuitive interface allows you to A/B test different LLMs from model providers (like OpenAI and Anthropic) and refine your retrieval mechanism to ground answers in your own data, indexed in one or more Elasticsearch indices. Playground can use transformer models directly in Elasticsearch, and it is also amplified by the Elasticsearch Open Inference API, which integrates with a growing list of inference providers, including Cohere and Azure AI Studio.
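
As a rough illustration of the retrieval side of such a RAG workflow (outside the Playground UI), the snippet below combines lexical and kNN vector retrieval in one Elasticsearch query and collects passages to ground an LLM answer. The index name, field names, and placeholder query vector are assumptions.

  from elasticsearch import Elasticsearch

  es = Elasticsearch("http://localhost:9200")

  question = "How do I rotate API keys?"
  query_vector = [0.0] * 384  # placeholder; in practice, produced by an embedding model

  resp = es.search(
      index="docs",
      query={"match": {"body": question}},  # lexical (BM25) side of hybrid search
      knn={
          "field": "body_vector",
          "query_vector": query_vector,
          "k": 5,
          "num_candidates": 50,
      },  # vector side of hybrid search
      size=5,
  )

  passages = [hit["_source"]["body"] for hit in resp["hits"]["hits"]]
  # These passages would be placed in the LLM prompt to ground the generated answer.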

https://www.elastic.co/search-labs/blog/rag-playground-introduction

Franz announces AllegroGraph 8.2

Franz Inc., a supplier of graph database technology for entity-event knowledge graph solutions, announced AllegroGraph 8.2, a neuro-symbolic AI platform, with enhancements to ChatStream, its natural language query interface. ChatStream’s Graph RAG with Feedback enables more accurate, context-aware, and continuously evolving natural language queries with stateful, contextually relevant responses. Additional updates include:

Knowledge Graph-as-a-Service – A new hosted, free version grants users access to AllegroGraph with LLMagic via a web login.

Enhanced Scalability and Performance – AllegroGraph includes improved FedShard capabilities that make managing sharding more straightforward and user-friendly, reducing query response time and improving system performance.

New Web Interface – AllegroGraph includes a redesign of its web interface, AGWebView, that provides an intuitive way to interact with the platform, while co-existing with the Classic View.

Advanced Knowledge Graph Visualization – A new version of Franz’s graph visualization software, Gruff v9, is integrated into AllegroGraph. Gruff now includes the ChatStream natural language query feature as a new way to query a knowledge graph, and it can visualize RDF-Star (RDF*) annotations, which let users add descriptions to edges in a graph, such as scores, weights, temporal aspects, and provenance.
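
To make the RDF-Star part concrete, here is an illustrative snippet (not tied to Franz’s APIs; names and values are hypothetical) showing the kind of edge annotation Gruff can display: a confidence score and a provenance note attached directly to a triple, plus a SPARQL-Star query that filters on the annotation.

  # Turtle-star data: annotations attached to the ex:suppliesTo edge itself.
  rdf_star_data = """
  @prefix ex: <http://example.org/> .

  ex:acme ex:suppliesTo ex:globex .
  << ex:acme ex:suppliesTo ex:globex >> ex:confidence 0.92 ;
                                        ex:source ex:q3_filing .
  """

  # SPARQL-star query: return supplier relationships with high-confidence annotations.
  sparql_star_query = """
  PREFIX ex: <http://example.org/>
  SELECT ?supplier ?customer ?conf WHERE {
    << ?supplier ex:suppliesTo ?customer >> ex:confidence ?conf .
    FILTER(?conf > 0.8)
  }
  """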

https://franz.com


© 2024 The Gilbane Advisor
