Curated for content, computing, and digital experience professionals

Category: Enterprise search & search technology (Page 4 of 60)

Research, analysis, and news about enterprise search and search markets, technologies, practices, and strategies, including semantic search, intranet, collaboration, and workplace search, ecommerce, and other applications.

Before we consolidated our blogs, industry veteran Lynda Moulton authored our popular enterprise search blog. This category collects all of her posts along with other enterprise search news and analysis.

For older, long form reports, papers, and research on these topics see our Resources page.

Bridgeline announces Zeus launch with Concept and Image Search

Bridgeline Digital, Inc., a provider of marketing software, announced that its HawkSearch “Smart Search” technology will be released on March 15, 2024. Smart Search introduces two new ways to search, Concept Search and Visual Search, which the company says will change how customers interact with search and increase engagement and revenue for businesses.

HawkSearch will introduce its “Zeus” update, with Smart Search and GenAI capabilities, on March 15, 2024. The update adds AI-powered Concept and Visual Search, which Bridgeline says will help HawkSearch customers generate more revenue. Smart Search uses AI models, vector databases, and large language models to process customer queries, whether typed or submitted as images. For instance, Visual Search lets a customer upload an image of a product, and HawkSearch shows similar items for purchase. Concept Search lets users describe their needs in natural language, and the system finds relevant products or information.
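The core idea behind concept search — embedding queries and catalog items as vectors and ranking by similarity — can be sketched in a few lines. The embeddings and product names below are toy values for illustration; HawkSearch's actual models and pipeline are not public.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" standing in for the vectors a real
# model would produce from product text or images.
catalog = {
    "redwood decking board": [0.9, 0.1, 0.2],
    "galvanized deck screws": [0.7, 0.6, 0.1],
    "kitchen faucet":         [0.1, 0.2, 0.9],
}

def concept_search(query_vec, k=2):
    """Return the k catalog items nearest the query embedding."""
    ranked = sorted(catalog, key=lambda name: cosine(query_vec, catalog[name]),
                    reverse=True)
    return ranked[:k]

# A natural-language query like "materials to build a backyard deck"
# would be embedded near the decking products; here we hand-pick a
# nearby vector instead of calling an embedding model.
print(concept_search([0.85, 0.3, 0.15]))
```

The same ranking step works whether the query vector comes from text or from an uploaded image, which is what lets one index serve both Concept and Visual Search.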

https://www.bridgeline.com

Couchbase adds vector search to database platform

Couchbase, Inc., a cloud database platform company, introduced vector search as a new feature in Couchbase Capella Database-as-a-Service (DBaaS) and Couchbase Server to help businesses bring to market new adaptive applications that engage users in a hyper-personalized and contextualized way. The new feature offers vector search optimized for running on-premises, across clouds, and on mobile and IoT devices at the edge, so organizations can run adaptive applications anywhere.

While vector-only databases aim to solve the challenges of processing and storing data for LLMs, having multiple standalone solutions adds complexity to the enterprise IT stack and slows application performance. Couchbase’s multipurpose capabilities deliver a simplified architecture to improve the accuracy of LLM results. Couchbase also makes it easier and faster for developers to build such applications with a single SQL++ query using the vector index, removing the need to use multiple indexes or products. With vector search as a feature across all Couchbase products, customers gain:

  • Similarity and hybrid search, combining text, vector, range and geospatial search capabilities in one.
  • RAG to make AI-powered applications more accurate, safe and timely.
  • Enhanced performance because all search patterns can be supported within a single index to lower response latency.
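A single SQL++ query combining a vector (k-nearest-neighbor) clause with ordinary projection might look roughly like the sketch below. The bucket, field, and vector values are hypothetical, and the exact request shape may differ by release; consult Couchbase's documentation for the authoritative syntax.

```sql
-- Hypothetical: "products" keyspace with an "embedding" vector field
-- indexed by the Search service.
SELECT p.name, p.price, SEARCH_SCORE() AS score
FROM products AS p
WHERE SEARCH(p, {
  "query": { "match_none": {} },
  "knn": [
    { "field": "embedding", "vector": [0.12, 0.87, 0.45], "k": 10 }
  ]
})
ORDER BY score DESC;
```

The point of the single-query approach is that the vector match, scoring, and relational projection all happen in one statement rather than across separate products.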

https://www.couchbase.com/blog/announcing-vector-search/

Algolia adds Looking Similar capability to AI Recommendations

Algolia launched a new ‘Looking Similar’ capability as part of its AI Recommendations solution. Looking Similar is an AI model that analyzes images in a retailer’s catalog to find and recommend other items that are visually similar. This new image-based feature is easy to implement and configure, and can enhance conversion rates by providing shoppers with a more visual browsing experience. 

With Looking Similar, users can more quickly find items that fit a specific theme, vibe, style, mood, or space, much as a shopper might visually explore products in a brick-and-mortar store. These visual recommendations are particularly useful when shoppers come across out-of-stock items, are simply looking for inspiration, or find a style they like but want differently priced options.

Retailers and marketplaces can implement Algolia’s Looking Similar, analyze a catalog, and quickly generate hundreds of recommendations. These recommendations can be further refined based on a number of preferred attributes such as ‘color’, ‘price’, and ‘size’.

Looking Similar gives retailers control: they can set thresholds for the similarity of image matches to create custom filters, specify the number of recommendations to display, and ensure a minimum level of image-match similarity.
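The threshold-and-limit controls described above amount to post-filtering a ranked candidate list. The sketch below is illustrative only — the field names and scores are invented, not Algolia's actual API.

```python
def filter_recommendations(candidates, min_similarity=0.8, max_items=4):
    """Keep candidates above a similarity threshold, best first,
    capped at a display limit (hypothetical shapes, not Algolia's API)."""
    kept = [c for c in candidates if c["similarity"] >= min_similarity]
    kept.sort(key=lambda c: c["similarity"], reverse=True)
    return kept[:max_items]

candidates = [
    {"objectID": "sofa-01", "similarity": 0.93},
    {"objectID": "sofa-07", "similarity": 0.81},
    {"objectID": "lamp-02", "similarity": 0.55},
]
print(filter_recommendations(candidates))  # the weak lamp match is dropped
```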

https://www.algolia.com/about/news/algolia-unveils-new-looking-similar-capability-elevating-shopping-experiences-with-image-based-recommendations/

Mindbreeze and Ariza Content Solutions partner

Mindbreeze, a provider of appliances and cloud services in the field of information insight, and Ariza Content Solutions have formed a partnership to deliver and enhance insight and search experiences and to provide companies with content management solutions.

Traditional content workflows ran from content creation, through editorial and production management, to final publication and delivery across distribution channels. Today, content creators are learning the value of connected, searchable data that enables deeper connections within their content. To accomplish this, Ariza employs solutions incorporating AI-driven insight engines such as Mindbreeze InSpire. Insight engines enhance the search experience, giving both external and internal customers access to knowledge and insight into their content.

Mindbreeze InSpire, an insight engine, uses traditional search methods and sophisticated data analysis approaches to interpret business information and answer critical business questions. Equipped with machine learning and AI capabilities, the Mindbreeze InSpire solution provides a foundation for successful enterprise knowledge management.

https://inspire.mindbreeze.com
https://www.arizacs.com

Elastic unveils Elasticsearch Query Language (ES|QL)

Elastic, the company behind Elasticsearch, announced Elasticsearch Query Language (ES|QL), its new piped query language designed to transform, enrich, and simplify data investigation with concurrent processing. ES|QL enables site reliability engineers (SREs), developers, and security professionals to perform data aggregation and analysis across a variety of data sources from a single query.

Over the last two decades, the data landscape has become more fragmented, opaque, and complex, driving the need for greater productivity and efficiency among developers, security professionals, and observability practitioners. Organizations need tools and services that offer iterative workflow, a broad range of operations, and central management to make security and observability professionals more productive. Elasticsearch Query Language key benefits include:

  • Delivers a comprehensive and iterative approach to data investigation with ES|QL piped query syntax.
  • Improves speed and efficiency regardless of data’s source or structure with a new ES|QL query engine that leverages concurrent processing.
  • Streamlines observability and security workflows with a single user interface, which allows users to search, aggregate and visualize data from a single screen.

ES|QL is currently available as a technical preview. The general availability version, scheduled for release in 2024, will include additional features to further streamline data analysis and decision-making.
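The piped syntax works by chaining processing commands left to right, each stage consuming the previous stage's output. An illustrative query against a hypothetical web-logs index:

```esql
FROM web-logs
| WHERE status >= 500
| STATS errors = COUNT(*) BY host
| SORT errors DESC
| LIMIT 5
```

Reading top to bottom: select the source index, filter to server errors, aggregate a count per host, then sort and truncate — the same filter/aggregate/rank pattern an SRE would otherwise assemble from several separate requests.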

https://www.elastic.co

Sinequa integrates enterprise search with Google’s Vertex AI

Enterprise Search provider Sinequa announced it has expanded its partnership with Google Cloud by adding its generative AI capabilities to Sinequa’s supported integrations. By combining the conversational abilities of Google Cloud’s Vertex AI platform with the factual knowledge provided by Sinequa’s intelligent search platform, businesses can use generative AI and gain insights from their enterprise content. 

Sinequa’s approach to generative AI is agnostic, ensuring compatibility with all major generative AI APIs. Sinequa’s support for Google Cloud’s Vertex AI platform and its expanding library of large language models (LLMs), such as PaLM 2, enables Sinequa users to leverage Google Cloud’s generative AI technologies for Retrieval-Augmented Generation (RAG) within their existing Sinequa ecosystem.

In combination with generative AI, Sinequa’s Neural Search uses the most relevant information across all your content to ground generative AI in the truth of your enterprise’s knowledge. With search and generative AI together, you can engage in dialogue with your information as you would with a knowledgeable colleague, while reducing concerns present with generative AI alone, such as hallucinations and security risks. This means you can converse with your content: conduct research, ask questions, and explore nuances, all with more accurate, relevant results.
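The grounding pattern described here — retrieve relevant passages first, then constrain the model to answer from them — can be sketched minimally. This is a generic RAG illustration, not Sinequa's actual API; the toy keyword retriever stands in for Neural Search, and the documents are invented.

```python
def retrieve(query, index):
    """Toy keyword-overlap retriever; a real system would use
    neural/vector search over the enterprise corpus."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in index]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

def build_prompt(query, passages):
    """Constrain the LLM to the retrieved context to curb hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

index = [
    "The 2023 expense policy caps hotel rates at $250 per night.",
    "Quarterly sales reports are stored in the finance workspace.",
]
prompt = build_prompt("What is the hotel rate cap?",
                      retrieve("hotel rate cap", index))
print(prompt)
```

Because the prompt carries the retrieved passages, the model's answer can be checked against (and attributed to) enterprise content rather than its training data.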

https://www.sinequa.com

OpenLink Software introduces the OpenLink Personal Assistant

From the OpenLink blog…

We are pleased to announce the immediate availability of the OpenLink Personal Assistant, a practical application of Knowledge Graph-driven Retrieval Augmented Generation (RAG) showcasing the power of knowledge discovery and exploration enabled by a modern conversational user interface. This modern approach revitalizes the enduring pursuit of high-performance, secure data access, integration, and management by harnessing the combined capabilities of Large Language Models (LLMs), Knowledge Graphs, and RAG, all propelled by declarative query languages such as SPARQL, SQL, and SPASQL (SPARQL inside SQL).

The GPT-4 and GPT-3.5-turbo foundation models form the backbone of the OpenLink Assistant, offering a sophisticated level of conversational interaction. These models can interpret context, thereby providing a user experience that intuitively emulates aspects of human intelligence.

What truly sets OpenLink Assistant apart is state-of-the-art RAG technology, integrated seamlessly with SPARQL, SQL, and SPASQL (SPARQL inside SQL). This fusion, coupled with our existing text indexing and search functionality, allows for real-time, contextually relevant data retrieval from domain-specific knowledge bases deployed as knowledge graphs.

  1. Self-Describing, Self-Supporting Products: OpenLink Assistant adds a self-describing element to our Virtuoso, ODBC & JDBC Drivers products by simply installing the Assistant’s VAD (Virtuoso Application Distro) package.
  2. OpenAPI-Compliance: With YAML and JSON description documents, OpenLink Assistant offers hassle-free integration into existing systems. Any OpenAPI compliant service can be integrated into its conversation processing pipeline while also exposing core functionality to other service consumer apps.
  3. Device Compatibility: Whether you’re on a desktop or a mobile device, OpenLink Assistant delivers a seamless interaction experience.
  4. UI Customization: The Assistant can be skinned to align with your application’s UI, ensuring a cohesive user experience.
  5. Versatile Query Support: With support for SQL, SPARQL, and SPASQL, OpenLink Assistant can interact with a multitude of data, information, and knowledge sources.
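To make the declarative-query side concrete, a RAG step over a knowledge graph typically issues a SPARQL query like the generic example below; the vocabulary (FOAF) is standard, but the data it runs against is hypothetical.

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find up to ten people and who they know in the knowledge graph.
SELECT ?name ?friendName
WHERE {
  ?person foaf:name ?name ;
          foaf:knows ?friend .
  ?friend foaf:name ?friendName .
}
LIMIT 10
```

SPASQL simply lets a query like this be embedded inside a SQL statement, so the same Virtuoso connection can join relational and graph results.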

https://medium.com/openlink-software-blog/introducing-the-openlink-personal-assistant-e74a76eb2bed

MongoDB announces new Atlas Vector Search capabilities

MongoDB announced new capabilities, performance improvements, and a data-streaming integration for MongoDB Atlas Vector Search.

Developers can more easily aggregate and filter data, improving semantic information retrieval and reducing hallucinations in AI-powered applications. With new performance improvements for MongoDB Atlas Vector Search, the time it takes to build indexes is reduced to help accelerate application development. Additionally, MongoDB Atlas Vector Search is now integrated with fully managed data streams from Confluent Cloud to make it easier to use real-time data from a variety of sources to power AI applications.

MongoDB Atlas Vector Search provides the functionality of a vector database integrated as part of a unified developer data platform, allowing teams to store and process vector embeddings alongside virtually any type of data to more quickly and easily build generative AI applications.
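Atlas Vector Search exposes this through a `$vectorSearch` aggregation stage. The sketch below shows the stage's shape as a Python pipeline definition; the index name, field names, filter, and query vector are illustrative, and in practice the vector would come from an embedding model.

```python
# Illustrative Atlas Vector Search pipeline (names and values invented).
query_vector = [0.02, -0.41, 0.77]  # would come from an embedding model

pipeline = [
    {
        "$vectorSearch": {
            "index": "embedding_index",      # Atlas Search index name
            "path": "embedding",             # field holding the vectors
            "queryVector": query_vector,
            "numCandidates": 100,            # ANN candidates to consider
            "limit": 5,                      # results to return
            "filter": {"category": "manuals"},  # pre-filter before scoring
        }
    },
    # Surface the similarity score alongside normal fields.
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# With pymongo against an Atlas cluster:
#   results = db.docs.aggregate(pipeline)
```

The `filter` clause is what the announcement's "aggregate and filter" improvement refers to: narrowing candidates by metadata before similarity scoring keeps irrelevant context out of downstream AI applications.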

https://www.mongodb.com/press/new-mongodb-atlas-vector-search-capabilities-help-developers-build-and-scale-ai-applications


© 2024 The Gilbane Advisor
