
Category: Enterprise search & search technology

Research, analysis, and news about enterprise search and search markets, technologies, practices, and strategies, covering applications such as semantic search, intranet and workplace collaboration, and ecommerce.

Before we consolidated our blogs, industry veteran Lynda Moulton authored our popular enterprise search blog. This category collects all of her posts together with our other enterprise search news and analysis, so her loyal readers can find everything here.

For older, long-form reports, papers, and research on these topics, see our Resources page.

StreamText updates Automatic Speech Recognition Caption platform

StreamText, an enterprise captioning platform, announced the latest release of its Automatic Speech Recognition (ASR) technology powered by artificial intelligence (AI). With the ability to create captions directly from an audio source, StreamText ASR features term glossaries to help fine-tune the captioning AI for specific events and increase overall accuracy. The platform offers direct integrations with meeting software such as Zoom and Adobe Connect. It also supports over 50 source languages, including variants of English, French, and Spanish. While human captioning is often more accurate than its AI counterparts, it is not practical for every captioning need; in those cases, StreamText ASR offers an alternative. ASR is useful in university settings, classrooms, government administration, and broadcast media.

https://streamtext.net/automatic-captions/

Neo4j adds Vector Search within its Native Graph Database

Neo4j, a graph database and analytics company, announced that it has integrated native vector search into its core database capabilities. The integration enables customers to achieve richer insights from semantic search and generative AI applications, and lets the database serve as long-term memory for LLMs, while reducing hallucinations.

Neo4j’s graph database can be used to create knowledge graphs, which capture and connect explicit relationships between entities, enabling AI systems to reason, infer, and retrieve relevant information effectively. The result ensures more accurate, explainable, and transparent outcomes for LLMs and other generative AI applications. By contrast, vector search captures implicit patterns and relationships, matching items with similar data characteristics rather than exact terms, which is useful when searching for similar text or documents, making recommendations, and identifying other patterns.
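
As a minimal sketch of what the combination can look like in practice, the snippet below creates a vector index and runs a similarity query through Neo4j's official Python driver. The index name, the Document label, the embedding property, the dimension count, and the query vector are illustrative assumptions rather than anything specified in the release.

    # Sketch: create and query a native vector index via the Neo4j Python driver.
    # Index name, label, property, dimensions, and the embedding are assumptions.
    from neo4j import GraphDatabase

    URI = "neo4j://localhost:7687"      # assumed local instance
    AUTH = ("neo4j", "password")        # assumed credentials

    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        # Cosine-similarity index over a hypothetical Document.embedding property
        driver.execute_query(
            "CALL db.index.vector.createNodeIndex("
            "'document-embeddings', 'Document', 'embedding', 1536, 'cosine')"
        )

        # Return the 5 nodes whose embeddings are closest to the query vector
        query_embedding = [0.1] * 1536  # placeholder; normally produced by an embedding model
        records, _, _ = driver.execute_query(
            "CALL db.index.vector.queryNodes('document-embeddings', 5, $embedding) "
            "YIELD node, score RETURN node.title AS title, score",
            embedding=query_embedding,
        )
        for record in records:
            print(record["title"], record["score"])

Because the retrieved nodes live in the same graph as the explicit relationships described above, a follow-up Cypher MATCH can traverse outward from them, combining similarity-based retrieval with knowledge-graph context in a single database.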

This latest advancement follows Neo4j’s June product integration with Google Cloud’s generative AI features in Vertex AI, which enables users to transform unstructured data into knowledge graphs, query them using natural language, and ground their LLMs against a factual set of patterns and criteria to prevent hallucinations.

https://neo4j.com/press-releases/neo4j-vector-search/

dtSearch updates enterprise products

dtSearch announced the release of version 2023.01 and the beta release of version 2023.02 of its enterprise and developer product line for instantly searching terabytes of online and offline data. The product line’s proprietary document filters cover popular “Office” formats, website data, databases, compression formats, and emails with attachments. dtSearch products can run either “on premises” at organizations or in a cloud environment such as Azure or AWS.

  • The release adds a new search results display for dtSearch’s enterprise products.
  • The beta adds sample code demonstrating use of the dtSearch Engine in an ASP.NET Core application running in a Windows (NanoServer) or Linux Docker container.
  • The beta also adds sample code demonstrating how to build NuGet packages to deploy the dtSearch Engine with associated dependencies including ICU, CMAP files, stemming rules, and the external file parsers.

https://www.dtsearch.com

SearchStax launches SearchStax for Good

SearchStax, a cloud search platform that enables web teams to deliver search in an easy and cost-effective way, announced the launch of SearchStax for Good, a new program that gives web and mobile development teams a frictionless way to manage Apache Solr workloads in the cloud.

SearchStax for Good is designed specifically for non-profits, helping to eliminate both the infrastructure management burden and the high budgetary barrier to entry. By offering an extended no-cost period of full-featured service, SearchStax for Good gives qualifying non-profits a way to get search infrastructure up and running immediately, without having to re-allocate budget or first secure budget approval.

At its initial launch at DrupalCon Pittsburgh 2023, SearchStax for Good will offer non-profit organizations six months of SearchStax Cloud Serverless at no cost. The solution delivers fast, scalable, and cost-effective Solr, giving web and product teams the ability to build quickly and scale automatically while optimizing resource utilization. After the initial six-month period ends, participating organizations can continue using the service at a 40% discount.

https://www.searchstax.com/ss-for-good

Snowflake acquires Neeva

From the Snowflake blog…

Search is fundamental to how businesses interact with data, and the search experience is evolving rapidly with new conversational paradigms emerging in the way we ask questions and retrieve information, enabled by generative AI. The ability for teams to discover precisely the right data point, data asset, or data insight is critical to maximizing the value of data.

That’s why Snowflake is acquiring Neeva, a search company founded to make search even more intelligent at scale. Neeva created a unique and transformative search experience that leverages generative AI and other innovations to allow users to query and discover data in new ways.

We plan to infuse and leverage these innovations across the Data Cloud to the benefit of our customers, partners and developers. Neeva allows us to tap into some of the most cutting-edge search technologies available to bring search and conversation in Snowflake to a new level.

As part of the acquisition, we are joined by some of the brightest minds working in search today. Neeva’s leadership and team members have been instrumental in the creation of numerous successful products like Google’s search advertising and YouTube monetization.

https://www.snowflake.com/blog/snowflake-acquires-neeva-to-accelerate-search-in-the-data-cloud-through-generative-ai/

Elastic unveils the Elasticsearch Relevance Engine

Elastic announced the launch of the Elasticsearch Relevance Engine (ESRE), which includes built-in vector search and transformer models and is designed to bring AI innovation to proprietary enterprise data. ESRE enables companies to create secure deployments that take advantage of all their proprietary structured and unstructured data.

Elastic has invested in foundational AI capabilities to democratize AI and machine learning for developers, with unified APIs for vector search, BM25f search, and hybrid search, plus a transformer model small enough to fit in a laptop’s memory.
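
As an illustration of the hybrid pattern those APIs support, the sketch below combines a BM25 match query with an approximate kNN vector query through the official Elasticsearch Python client. The index name, field names, vector dimension, and query vector are assumptions for the example, not part of the ESRE announcement.

    # Sketch: hybrid search combining lexical (BM25) matching with approximate
    # kNN vector search via the official Elasticsearch Python client.
    # Index name, field names, and the query vector are assumptions.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")   # assumed local deployment

    resp = es.search(
        index="articles",                                         # hypothetical index
        query={"match": {"body": "vector database similarity"}},  # lexical (BM25) side
        knn={
            "field": "body_embedding",           # hypothetical dense_vector field
            "query_vector": [0.1] * 384,         # placeholder; normally from an embedding model
            "k": 10,
            "num_candidates": 100,
        },
        size=10,
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"].get("title"))

When both query and knn are supplied, Elasticsearch combines the lexical and vector scores for each hit, which is the hybrid ranking behavior the announcement describes.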

Using a relevance engine like ESRE allows companies to take advantage of all of their structured and unstructured data to build custom generative AI (GAI) apps without having to worry about the size and cost of running large language models. The ability to “bring your own” transformer model and integrate with third-party transformer models allows organizations to create secure deployments that leverage GAI on their specific business data. With ESRE, the companies and community of users that have invested in Elastic solutions can advance AI initiatives right now without significant additional resources.

https://www.elastic.co/enterprise-search/generative-ai

Docugami announces integration with LlamaIndex

Docugami, a document engineering company that transforms how businesses create and execute critical business documents, announced an initial integration with LlamaIndex via the Llama Hub.

The LlamaIndex framework provides a flexible interface between a user’s information and Large Language Models (LLMs). Coupling LlamaIndex with Docugami’s ability to generate a Document XML Knowledge Graph representation of long-form Business Documents opens opportunities for LlamaIndex developers to build LLM applications that connect users to their own Business Documents, without being limited by document size or context window restrictions.
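
As a minimal sketch of that coupling, the snippet below pulls the DocugamiReader loader from Llama Hub and feeds its output into a LlamaIndex vector index. The docset ID, document IDs, the API key handling, and the exact parameter names are assumptions based on the Llama Hub listing, not details from the announcement.

    # Sketch: load Docugami-processed documents through the Llama Hub loader
    # and query them with LlamaIndex. Docset/document IDs are placeholders,
    # and the loader's parameter names are assumptions from the Llama Hub listing.
    import os
    from llama_index import VectorStoreIndex, download_loader

    os.environ["DOCUGAMI_API_KEY"] = "<your-docugami-api-key>"   # placeholder

    DocugamiReader = download_loader("DocugamiReader")
    reader = DocugamiReader()
    documents = reader.load_data(
        docset_id="<your-docset-id>",                # placeholder
        document_ids=["<doc-id-1>", "<doc-id-2>"],   # placeholder
    )

    # Build a vector index over the semantically chunked documents and query it
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine()
    print(query_engine.query("What is the renewal term in this agreement?"))

Because Docugami returns semantically chunked XML rather than arbitrary fixed-size text windows, each loaded document carries structural context that the downstream index can use, which is how the integration avoids the document size and context window limits mentioned above.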

General purpose LLMs alone cannot deliver the accuracy needed for business, financial, legal, and scientific settings because they are trained on the public internet, which introduces a wide range of irrelevant and low-quality source materials. By contrast, Docugami is trained exclusively for business scenarios, for greater accuracy and reliability.

Systems that aim to understand the content of documents, such as retrieval and question-answering systems, will benefit from Docugami’s semantic Document XML Knowledge Graph representation. Our unique approach to document chunking allows for better understanding and processing of your documents.

https://www.docugami.com/blog/llamaindex

Sinequa enhances platform for scientific search and clinical trial data

AI-powered search provider Sinequa has announced domain-specific enhancements to its intelligent search platform for Scientific Search and Clinical Trial Data. Its search platform now utilizes new Neural Search and ChatGPT capabilities for faster, more effective discovery and decisions in drug development and clinical research. Sinequa will present these capabilities at the 2023 Bio-IT World Conference, May 16-18, at the Boston Convention and Exhibition Center, during conference sessions and at booth #803 in Auditorium Hall C. 

Combining the capabilities of Sinequa Neural Search – multiple deep learning and large language models for natural language understanding (NLU) – with the latest ChatGPT models through Azure OpenAI Service, Sinequa enables accurate, fast, traceable semantic search, insight generation, and summarization. Users can query and converse with a secure corpus of data, including proprietary life science systems, enterprise collaboration systems, and external data sources, to answer complex and nuanced questions. Comprehensive search results with high relevance and the ability to generate concise summaries enhance R&D intelligence, optimize clinical trials, and streamline regulatory workflows. 
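
Sinequa's own APIs are not described in this announcement, but the general retrieval-plus-ChatGPT pattern it outlines looks roughly like the sketch below, which sends retrieved passages to a ChatGPT deployment through Azure OpenAI Service using the pre-1.0 openai Python package. The endpoint, deployment name, and passages are placeholders, and the retrieval step that Sinequa Neural Search would perform over a secured corpus is stubbed out.

    # Generic sketch of the retrieval-augmented pattern described above: passages
    # retrieved from a secured corpus are summarized by a ChatGPT model through
    # Azure OpenAI Service. This is not Sinequa's API; endpoint, deployment name,
    # and passages are placeholders. Uses the pre-1.0 openai package interface.
    import openai

    openai.api_type = "azure"
    openai.api_base = "https://<your-resource>.openai.azure.com/"   # placeholder endpoint
    openai.api_version = "2023-05-15"
    openai.api_key = "<your-azure-openai-key>"                      # placeholder key

    # In Sinequa's platform these would come from Neural Search over life science
    # and clinical trial sources; here they are stand-ins.
    passages = [
        "<retrieved passage 1: clinical trial protocol excerpt>",
        "<retrieved passage 2: regulatory filing excerpt>",
    ]

    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",                                      # placeholder deployment name
        messages=[
            {"role": "system", "content": "Summarize the passages for a clinical researcher."},
            {"role": "user", "content": "\n\n".join(passages)},
        ],
        temperature=0.2,
    )
    print(response["choices"][0]["message"]["content"])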

https://www.sinequa.com/enterprise-search-for-industries/healthcare-life-science/
