Curated for content, computing, and digital experience professionals

Author: NewsShark

Couchbase adds vector search to database platform

Couchbase, Inc., a cloud database platform company, introduced vector search as a new feature in Couchbase Capella Database-as-a-Service (DBaaS) and Couchbase Server to help businesses bring to market a new class of adaptive applications that engage users in a hyper-personalized and contextualized way. The new feature offers vector search optimized for running onsite, across clouds, and on mobile and IoT devices at the edge, so organizations can run adaptive applications anywhere.

While vector-only databases aim to solve the challenges of processing and storing data for LLMs, having multiple standalone solutions adds complexity to the enterprise IT stack and slows application performance. Couchbase’s multipurpose capabilities deliver a simplified architecture to improve the accuracy of LLM results. Couchbase also makes it easier and faster for developers to build such applications with a single SQL++ query using the vector index, removing the need to use multiple indexes or products. With vector search as a feature across all Couchbase products, customers gain:

  • Similarity and hybrid search, combining text, vector, range and geospatial search capabilities in one.
  • RAG to make AI-powered applications more accurate, safe and timely.
  • Enhanced performance because all search patterns can be supported within a single index to lower response latency.
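
The benefit of handling filters and vector similarity in a single query, rather than stitching together separate products, can be illustrated with a self-contained toy: one pass that applies a scalar filter and ranks by cosine similarity together. All names below are illustrative; this is not the Couchbase SDK or SQL++ syntax:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "documents": scalar fields plus a precomputed embedding vector.
docs = [
    {"id": 1, "color": "red",  "vec": [0.9, 0.1, 0.0]},
    {"id": 2, "color": "blue", "vec": [0.1, 0.9, 0.2]},
    {"id": 3, "color": "red",  "vec": [0.8, 0.2, 0.1]},
]

def hybrid_search(query_vec, color, k=2):
    """One pass: apply the scalar filter and rank by similarity together."""
    matches = [d for d in docs if d["color"] == color]
    matches.sort(key=lambda d: cosine(d["vec"], query_vec), reverse=True)
    return [d["id"] for d in matches[:k]]

print(hybrid_search([1.0, 0.0, 0.0], "red"))  # → [1, 3]
```

A single index supporting both access patterns means the filter and the ranking happen in one place, which is the latency argument the bullet list makes.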

IBM announces availability of open-source Mistral AI Model on watsonx 

IBM announced the availability of the popular open-source Mixtral-8x7B large language model (LLM), developed by Mistral AI, on its watsonx AI and data platform, as it continues to expand capabilities to help clients innovate with IBM’s own foundation models and those from a range of open-source providers.

The addition of Mixtral-8x7B expands IBM’s open, multi-model strategy to meet clients where they are and give them choice and flexibility to scale enterprise AI solutions across their businesses.

Mixtral-8x7B was built using a combination of Sparse modeling — a technique that finds and uses only the most essential parts of data to create more efficient models — and the Mixture-of-Experts technique, which combines different models (“experts”) that specialize in and solve different parts of a problem. The Mixtral-8x7B model is widely known for its ability to rapidly process and analyze vast amounts of data to provide context-relevant insights.
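
In a sparse Mixture-of-Experts layer, a small router scores the experts for each input, only the top-scoring few actually run, and their outputs are combined using the router's weights. A self-contained toy sketch of that routing pattern (illustrative only, not Mixtral's actual architecture):

```python
# Toy sparse Mixture-of-Experts: each "expert" is a simple function,
# and a router activates only the top-k experts per input.
experts = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def router(x):
    """Score each expert for this input (fixed toy scores here;
    a real router is a learned layer conditioned on x)."""
    return {"double": 0.6, "square": 0.3, "negate": 0.1}

def moe(x, k=2):
    scores = router(x)
    top = sorted(scores, key=scores.get, reverse=True)[:k]  # sparse: top-k only
    total = sum(scores[e] for e in top)
    # Weighted combination of just the selected experts' outputs.
    return sum(scores[e] / total * experts[e](x) for e in top)

print(moe(4))  # only "double" and "square" run; "negate" is skipped
```

Sparsity is the efficiency trick: for each input, most experts never execute, so the model can hold many parameters while paying the compute cost of only a few.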

This week, IBM also announced the availability of ELYZA-japanese-Llama-2-7b, a Japanese LLM open-sourced by ELYZA Corporation, on watsonx. IBM also offers Meta’s open-source models Llama-2-13B-chat and Llama-2-70B-chat and other third-party models on watsonx.

Flux launches full release of WordPress on decentralized platform

Flux, a global decentralized technology company specializing in cloud infrastructure, cloud computing, artificial intelligence, and decentralized storage, officially launched WordPress on its platform following a successful beta phase that began in February 2023. The launch makes WordPress, the most popular content management system (CMS), accessible on Flux’s decentralized infrastructure.

The backbone of this offering is Flux’s extensive network, encompassing nodes that range from individual users’ machines to enterprise-grade data centers, designed to guarantee peak performance for WordPress sites. This infrastructure addresses crucial web metrics such as bounce rate and conversion rate, which are significantly affected by loading speeds.

Flux’s decentralized WordPress solution offers an efficient, cost-effective, and scalable hosting option. The ease of deploying a WordPress site on Flux, demanding minimal technical expertise, ensures accessibility for a wider audience, which is reinforced by extensive support resources helping to make the technology user-friendly and empowering for all.

Key features include geolocation capabilities for optimizing site performance based on user concentration, alongside built-in redundancy to guarantee maximum availability. With a four-tiered pricing plan that competes favorably with traditional hosting services, Flux delivers web hosting for all project sizes and budgets with accessibility at its core.

Algolia adds Looking Similar capability to AI Recommendations

Algolia launched a new ‘Looking Similar’ capability as part of its AI Recommendations solution. Looking Similar is an AI model that analyzes images in a retailer’s catalog to find and recommend other items that are visually similar. This new image-based feature is easy to implement and configure, and can enhance conversion rates by providing shoppers with a more visual browsing experience. 

With Looking Similar, users can more quickly find items that fit a specific theme, vibe, style, mood, or space, much as a shopper might visually explore products in a brick-and-mortar store. These visual recommendations are particularly useful when shoppers come across out-of-stock items, are simply looking for inspiration, or find a style they like but want differently priced options.

Retailers and marketplaces can implement Algolia’s Looking Similar, analyze a catalog, and quickly generate hundreds of recommendations. These recommendations can be further refined based on a number of preferred attributes such as ‘color’, ‘price’, and ‘size’.

Looking Similar gives retailers control: they can set thresholds for the similarity of image matches to create custom filters, and specify the number of recommendations to be displayed.
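
The threshold-and-refine behavior described above can be mimicked in a toy sketch: rank catalog items by similarity of (hypothetical) image embeddings, drop matches below a similarity threshold, and refine on preferred attributes such as color. This is illustrative only; Algolia's actual feature is configured through its own dashboard and APIs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy catalog: each item carries attributes and a fake image embedding.
catalog = [
    {"sku": "A", "color": "green", "price": 40, "img_vec": [0.9, 0.1]},
    {"sku": "B", "color": "green", "price": 90, "img_vec": [0.7, 0.7]},
    {"sku": "C", "color": "blue",  "price": 35, "img_vec": [0.95, 0.05]},
]

def looking_similar(query_vec, threshold=0.9, max_results=5, **attrs):
    """Threshold visual similarity, then refine on preferred attributes."""
    hits = []
    for item in catalog:
        sim = cosine(query_vec, item["img_vec"])
        if sim < threshold:
            continue  # below the retailer-set similarity threshold
        if any(item.get(k) != v for k, v in attrs.items()):
            continue  # fails an attribute refinement such as color
        hits.append((sim, item["sku"]))
    hits.sort(reverse=True)
    return [sku for _, sku in hits[:max_results]]

print(looking_similar([1.0, 0.0], threshold=0.9, color="green"))  # → ['A']
```

Raising the threshold trades recall for visual closeness, which is exactly the control the feature hands to retailers.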

DataStax and LlamaIndex partner to make building RAG applications easier

DataStax announced that its retrieval-augmented generation (RAG) solution, RAGStack, is now generally available with LlamaIndex as a supported open source framework, in addition to LangChain. DataStax RAGStack for LlamaIndex also supports an integration (currently in public preview) with LlamaIndex’s LlamaParse, which gives developers using Astra DB an API to parse and transform complex PDFs into vectors in minutes.

LlamaIndex is a framework for ingesting, indexing, and querying data for building generative AI applications and addresses the ingestion pipelines needed for enterprise-ready RAG. LlamaParse is LlamaIndex’s new offering that targets enterprise developers building RAG over complex PDFs; it enables clean extraction of tables by running recursive retrieval, promising more accurate parsing of the complex documents often found in business.
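
The ingest-index-retrieve pattern that frameworks like LlamaIndex implement can be sketched end to end with toy "embeddings" (word overlap stands in for a real embedding model; no LlamaIndex APIs are used):

```python
# Minimal RAG pipeline sketch: ingest, index, retrieve, build a prompt.
docs = [
    "Astra DB is a vector database service.",
    "RAGStack bundles tested versions of RAG components.",
    "LlamaParse converts complex PDFs into clean text.",
]

def embed(text):
    """Toy 'embedding': a set of lowercase words (stands in for a model)."""
    return set(w.strip(".?!") for w in text.lower().split())

index = [(embed(d), d) for d in docs]            # ingest + index

def retrieve(question, k=1):
    """Rank documents by overlap with the question; return the top-k."""
    q = embed(question)
    scored = sorted(index, key=lambda e: len(q & e[0]), reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(question):
    """Ground the question in retrieved context before calling an LLM."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does LlamaParse do with PDFs?"))
```

A production pipeline swaps the toy pieces for real ones (an embedding model, a vector store such as Astra DB, and an LLM call), but the control flow is the same.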

RAGStack with LlamaIndex offers a solution tailored to address the challenges encountered by enterprise developers in implementing RAG solutions. Benefits include a curated Python distribution available on PyPI for integration with Astra DB, DataStax Enterprise (DSE), and Apache Cassandra, and a live RAGStack test matrix and GenAI app templates.

Users can use LlamaIndex alone, or in combination with LangChain and their ecosystem including LangServe, LangChain Templates, and LangSmith.

Acquia enhances brand management capabilities

Acquia announced new integrations for its digital asset management solution, Acquia DAM, that expand its brand management capabilities. These integrations — with Acquia Campaign Studio, Adobe Stock, and Google Translate — reduce the complexity of maintaining a consistent brand experience across digital channels.

Acquia DAM is now integrated with Acquia Campaign Studio, the company’s marketing automation solution. The integration leverages Acquia’s instant search connector tool, so once a user is authenticated in the DAM connector within Campaign Studio, they can search, view, and select the asset of their choice within Campaign Studio’s email and landing page builders. Pictures in email and landing page builders dynamically change when updated in Acquia DAM.

An Adobe Stock integration automatically syncs a customer’s newly licensed Adobe Stock assets with Acquia DAM, bringing in essential metadata and offering smoother workflows. Creative pros can choose which types of Adobe Stock assets to monitor and sync, and the integration handles file copying and categorization in Acquia DAM. Customers can now use Google Translate to automatically translate text from selected metadata fields within Acquia DAM. The DAM automatically repopulates these fields with translated content in up to 20 languages.
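
The translation workflow described above, in which selected metadata fields pass through a translation step and are repopulated per language, can be sketched with a placeholder translate function (illustrative only; not Acquia's or Google's API):

```python
# Toy sketch: selected metadata fields are sent through a translation
# step and the translated values are written back per target language.
def translate(text, lang):
    """Placeholder for a real translation API call (e.g. Google Translate)."""
    return f"[{lang}] {text}"

asset = {"title": "Mountain sunrise", "description": "Golden light over peaks"}
fields_to_translate = ["title"]          # only selected fields are translated
target_langs = ["de", "fr", "ja"]

for field in fields_to_translate:
    asset[f"{field}_translations"] = {
        lang: translate(asset[field], lang) for lang in target_langs
    }

print(asset["title_translations"]["de"])  # → "[de] Mountain sunrise"
```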

Adobe announces AI Assistant in Reader and Acrobat

Adobe introduced AI Assistant in beta, a new generative AI-powered conversational engine in Reader and Acrobat. Integrated into Reader and Acrobat workflows, AI Assistant instantly generates summaries and insights from long documents, answers questions and formats information for sharing in emails, reports and presentations.

AI Assistant leverages the same artificial intelligence and machine learning models behind Acrobat Liquid Mode, technology that supports responsive reading experiences for PDFs on mobile. These proprietary models provide a deep understanding of PDF structure and content, enhancing quality and reliability in AI Assistant outputs.

Acrobat Individual, Pro and Teams customers and Acrobat Pro trialists can use the AI Assistant beta to work more productively today. No complicated implementations required. Simply open Reader or Acrobat and start working with the new capabilities.

Reader and Acrobat customers will have access to the full range of AI Assistant capabilities through a new add-on subscription plan when AI Assistant is out of beta. Until then, the new AI Assistant features are available in beta for Acrobat Standard and Pro Individual and Teams subscription plans on desktop and web in English, with features coming to Reader desktop customers in English over the next few weeks at no additional cost.

Ontotext releases Ontotext Metadata Studio 3.7

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, announced the availability of Ontotext Metadata Studio (OMDS) 3.7, an all-in-one environment that facilitates the creation, evaluation, and quality improvement of text analytics services. This latest release provides out-of-the-box, rapid natural language processing (NLP) prototyping and development so organizations can iteratively create a text analytics service that best serves their domain knowledge. 

As part of Ontotext’s AI-in-Action initiative, which helps data scientists and engineers benefit from the AI capabilities of its products, the latest version enables users to tag content with the Common English Entity Linking (CEEL) text analytics service. CEEL is trained to link mentions of people, organizations, and locations to their representations in Wikidata, the public knowledge graph that includes close to 100 million entity instances. With OMDS, organizations can recognize approximately 40 million Wikidata concepts and streamline information extraction from text and enrichment of databases and knowledge graphs. Organizations can:

  • Automate tagging and categorization of content to facilitate more efficient discovery, reviews, and knowledge synthesis. 
  • Enrich content, achieve precise search, improve SEO, and enhance the performance of LLMs and downstream analytics.
  • Streamline information extraction from large volumes of unstructured content and analyze market trends.
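
Dictionary-style entity linking of the kind CEEL performs can be illustrated with a toy tagger that maps surface mentions in text to knowledge-graph identifiers (the Q-style IDs below are placeholders, not real Wikidata entries):

```python
# Toy entity linker: match known surface forms in text to KG identifiers.
# The Q-style IDs are placeholders, not real Wikidata entries.
KG = {
    "acme corp": "Q0000001",
    "jane doe": "Q0000002",
    "springfield": "Q0000003",
}

def tag(text):
    """Return (mention, entity_id) pairs for known mentions in the text."""
    lowered = text.lower()
    found = []
    for mention, qid in KG.items():
        pos = lowered.find(mention)
        if pos != -1:
            # Report the mention as it appears in the original text.
            found.append((text[pos:pos + len(mention)], qid))
    return found

print(tag("Jane Doe joined Acme Corp in Springfield."))
```

Production linkers handle ambiguity and context (many mentions map to several candidate entities), but the output shape, mention plus identifier, is what downstream enrichment consumes.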


© 2024 The Gilbane Advisor
