Curated for content, computing, and digital experience professionals

Category: Enterprise search & search technology

Research, analysis, and news about enterprise search and search markets, technologies, practices, and strategies, including semantic search and applications such as intranet and workplace collaboration, ecommerce, and more.

Before we consolidated our blogs, industry veteran Lynda Moulton authored our popular enterprise search blog. This category collects all of her posts, along with other enterprise search news and analysis, so her loyal readers can find them in one place.

For older, long-form reports, papers, and research on these topics, see our Resources page.

Microsoft introduces Bing generative search

From The Microsoft Bing Blog…

… Today, we’re excited to share an early view of our new generative search experience which is currently shipping to a small percentage of user queries …

This new experience combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs). It understands the search query, reviews millions of sources of information, dynamically matches content, and generates search results in a new AI-generated layout to fulfill the intent of the user’s query more effectively.

We’ve refined our methods to optimize accuracy in Bing, applying those insights as we continue to evolve our use of LLMs in search. We are continuing to look closely at how generative search impacts traffic to publishers. Early data indicates that this experience maintains the number of clicks to websites and supports a healthy web ecosystem. The generative search experience is designed with this in mind, including retaining traditional search results and increasing the number of clickable links, like the references in the results. 

We are slowly rolling this out and will take our time, garner feedback, test and learn, and work to create a great experience before making this more broadly available.

https://blogs.bing.com/search/July-2024/generativesearch

Elastic introduces Playground to accelerate RAG development with Elasticsearch

Elastic announced Playground, a low-code interface that enables developers to build RAG applications using Elasticsearch in minutes.

When prototyping conversational search, the ability to rapidly iterate on and experiment with key components of a RAG workflow (for example, hybrid search or adding reranking) is important for getting accurate, hallucination-free responses from LLMs.

The Elasticsearch vector database and the Search AI platform provide developers with a wide range of capabilities, such as comprehensive hybrid search and the ability to use innovations from a growing list of LLM providers. The Playground experience lets you use the power of those features without added complexity.

Playground’s intuitive interface allows you to A/B test different LLMs from model providers (like OpenAI and Anthropic) and refine your retrieval mechanism to ground answers with your own data indexed into one or more Elasticsearch indices. The Playground experience can leverage transformer models directly in Elasticsearch, and is also amplified by the Elasticsearch Open Inference API, which integrates with a growing list of inference providers including Cohere and Azure AI Studio.
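
To make the hybrid-search-plus-RAG workflow above concrete, here is a minimal Python sketch of the pattern Playground helps you prototype. It assumes a running Elasticsearch 8.x cluster, an OpenAI API key, and an index named docs with content and content_vector fields; those names, the model choice, and the query embedding being computed elsewhere are illustrative assumptions, not Elastic defaults.

    # Sketch: hybrid retrieval (BM25 + kNN) feeding an LLM. Index and field
    # names are assumptions; the query embedding is computed elsewhere with
    # the same model used at indexing time.
    from elasticsearch import Elasticsearch
    from openai import OpenAI

    es = Elasticsearch("http://localhost:9200")   # assumed local cluster
    llm = OpenAI()                                # reads OPENAI_API_KEY

    def retrieve(question: str, query_vector: list[float], k: int = 5) -> list[str]:
        """Combine a lexical (match) leg and a vector (kNN) leg in one request."""
        resp = es.search(
            index="docs",                               # hypothetical index
            query={"match": {"content": question}},     # lexical leg
            knn={
                "field": "content_vector",              # hypothetical field
                "query_vector": query_vector,
                "k": k,
                "num_candidates": 50,
            },
            size=k,
        )
        return [hit["_source"]["content"] for hit in resp["hits"]["hits"]]

    def answer(question: str, query_vector: list[float]) -> str:
        context = "\n\n".join(retrieve(question, query_vector))
        prompt = (
            "Answer using only the context below; say you don't know otherwise.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        chat = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return chat.choices[0].message.content

Swapping out the retrieval step, such as adding a reranker or moving inference to the Open Inference API, is exactly the kind of iteration Playground is meant to speed up.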

https://www.elastic.co/search-labs/blog/rag-playground-introduction

Franz announces AllegroGraph 8.2

Franz Inc., a supplier of Graph Database technology for Entity-Event Knowledge Graph Solutions, announced AllegroGraph 8.2, a Neuro-Symbolic AI Platform, with enhancements to ChatStream offering users a natural language query interface that provides more accurate and contextually relevant responses. ChatStream’s Graph RAG with Feedback enables more accurate, context-aware, and continuously evolving natural language queries, providing stateful and contextually relevant responses. Additional updates include:

Knowledge Graph-as-a-Service – A new hosted, free version grants users access to AllegroGraph with LLMagic via a web login.

Enhanced Scalability and Performance – AllegroGraph includes enhanced FedShard capabilities making the management of sharding more straightforward and user-friendly, reducing query response time and improving system performance.

New Web Interface – AllegroGraph includes a redesign of its web interface, AGWebView, that provides an intuitive way to interact with the platform, while co-existing with the Classic View.

Advanced Knowledge Graph Visualization – A new version of Franz’s graph visualization software, Gruff v9, is integrated into AllegroGraph. Gruff now includes the ChatStream Natural Language Query feature as a new way to query your Knowledge Graph, and its visualizations illustrate RDF-Star (RDF*) annotations, enabling users to add descriptions to edges in a graph, such as scores, weights, temporal aspects, and provenance.
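
For readers unfamiliar with the Graph RAG pattern behind ChatStream, the following is a generic Python sketch of the loop, not AllegroGraph’s actual API: an LLM drafts a SPARQL query from a natural-language question, the query runs against a knowledge graph endpoint, and the results ground the final answer. The endpoint URL, model, and prompts are illustrative assumptions, and the stateful feedback ChatStream adds is omitted.

    # Generic Graph RAG loop (illustrative; not AllegroGraph's API).
    from SPARQLWrapper import SPARQLWrapper, JSON
    from openai import OpenAI

    llm = OpenAI()   # reads OPENAI_API_KEY
    endpoint = SPARQLWrapper("http://localhost:10035/repositories/demo/sparql")  # hypothetical

    def ask_llm(prompt: str) -> str:
        resp = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def graph_rag(question: str) -> str:
        # 1. Translate the question into SPARQL.
        sparql = ask_llm(
            "Write only a SPARQL SELECT query (no prose) answering: " + question
        )
        # 2. Run it against the knowledge graph.
        endpoint.setQuery(sparql)
        endpoint.setReturnFormat(JSON)
        rows = endpoint.query().convert()["results"]["bindings"]
        # 3. Ground the answer in the returned bindings.
        return ask_llm(
            f"Question: {question}\nGraph results: {rows}\n"
            "Answer using only these results."
        )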

https://franz.com

Snowflake announces enhancements to Snowflake Cortex AI, Snowflake ML, and more

Snowflake announced new innovations and enhancements to Snowflake Cortex AI to unlock the next wave of enterprise AI for customers to create AI-powered applications. This includes new chat experiences, which help organizations develop chatbots so they can talk directly to their enterprise data and get the answers they need faster. In addition, Snowflake is democratizing how any user can customize AI for specific industry use cases through a new no-code interactive interface, access to large language models (LLMs), and serverless fine-tuning. Snowflake is also accelerating the path for operationalizing models with an integrated experience for machine learning (ML) through Snowflake ML, enabling developers to build, discover, and govern models and features across the ML lifecycle. Snowflake’s unified platform for generative AI and ML allows every part of the business to extract value from their data.

Snowflake is unveiling two new chat capabilities, Snowflake Cortex Analyst and Snowflake Cortex Search, allowing users to develop these chatbots in a matter of minutes against structured and unstructured data, without operational complexity. Cortex Analyst, built with Meta’s Llama 3 and Mistral Large models, allows businesses to build applications on top of their analytical data in Snowflake. Other announced enhancements include Snowflake Copilot, Cortex Guard, Document AI, and Hybrid Tables.
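
Cortex Analyst packages the text-to-SQL chat pattern as a managed service; the sketch below is a generic Python illustration of that pattern rather than Snowflake’s API. It assumes a local SQLite database, a known schema, and an OpenAI key, and it skips the validation and guardrails a production system would add around generated SQL.

    # Generic "chat with your structured data" loop (illustrative only).
    import sqlite3
    from openai import OpenAI

    llm = OpenAI()                        # reads OPENAI_API_KEY
    conn = sqlite3.connect("sales.db")    # hypothetical analytical database

    SCHEMA = "CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, order_date TEXT);"

    def chat_with_data(question: str):
        # Ask the LLM to draft SQL against the known schema.
        resp = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Schema:\n{SCHEMA}\n\nWrite one SQLite SELECT statement "
                    f"(no prose, no code fences) answering: {question}"
                ),
            }],
        )
        sql = resp.choices[0].message.content.strip()
        # A real system would validate the generated SQL before running it.
        return sql, conn.execute(sql).fetchall()

    # Example: chat_with_data("What is the total order amount per region?")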

https://www.snowflake.com/news/snowflake-brings-industry-leading-enterprise-ai-to-even-more-users-with-new-advancements-to-snowflake-cortex-ai-and-snowflake-ml

Ontotext announces Metadata Studio 3.8

Ontotext, a provider of enterprise knowledge graph and semantic database engines, announced the latest version of Ontotext Metadata Studio (OMDS), a tool designed for knowledge graph enrichment through text analytics of unstructured documents. Version 3.8 aids in the creation, evaluation, and quality improvement of text analytics services. With more intuitive and effective search capabilities, the enhancements to OMDS remove the difficulties users face when exposing semantic search over their documents, especially when they are working with their own custom reference domain models. Updates include:

  • Enhanced Domain Model Search Interface transforms the reference annotation schema into a user-friendly search interface, allowing exploration and retrieval of content based on the preferred domain data model.
  • Knowledge Graph Enrichment and Extension enables users to reuse their domain models so they can be leveraged for advanced analytics and quality management.
  • Advanced Search Capabilities support all types of searches. Users can conduct simple searches, such as identifying documents containing specific text, as well as complex queries that filter documents based on the presence or absence of certain text and on combinations of metadata objects and property values.
  • Improved Usability and Workflow Efficiency enables users to organize content effortlessly by moving documents between corpora or deleting them from the database.

https://www.ontotext.com/products/ontotext-metadata-studio/

Perplexity introduces Perplexity Pages

Snippets from the Perplexity blog…

You’ve used Perplexity to search for answers, explore new topics, and expand your knowledge. Now, it’s time to share what you learned. Meet Perplexity Pages, your new tool for easily transforming research into visually stunning, comprehensive content. Whether you’re crafting in-depth articles, detailed reports, or informative guides, Pages streamlines the process so you can focus on sharing your knowledge with the world.

Pages lets you effortlessly create, organize, and share information. Search any topic, and instantly receive a well-structured, beautifully formatted article. Publish your work to our growing library of user-generated content and share it directly with your audience with a single click. What sets Perplexity Pages apart?

  • Customizable: Tailor the tone of your Page to resonate with your target audience, whether you’re writing for general readers or subject matter experts.
  • Adaptable: Easily modify the structure of your article—add, rearrange, or remove sections to best suit your material and engage your readers.
  • Visual: Elevate your articles with visuals generated by Pages, uploaded from your personal collection, or sourced online.

Pages is rolling out to users now. Log in to your Perplexity account and select “Create a Page” in the library tab.

https://www.perplexity.ai/page/new

Sinequa releases new generative AI assistants

Sinequa announced the availability of Sinequa Assistants, enterprise generative AI assistants that integrate with enterprise content and applications to augment and transform knowledge work. Sinequa’s Neural Search complements GenAI and provides the foundation for Sinequa’s Assistants. The Assistants’ capabilities go beyond RAG’s conventional search-and-summarize paradigm to intelligently execute complex, multi-step activities, all grounded in facts to augment the way employees work.

Sinequa’s Assistants leverage all company content and knowledge to generate contextually-relevant insights and recommendations. Optimized for scale with three custom-trained small language models (SLMs), Sinequa Assistants help ensure accurate conversational responses on any internal topic, complete with citations and traceability to the original source.

Sinequa Assistants work with any public or private generative LLM, including Cohere, OpenAI, Google Gemini, Microsoft Azure OpenAI, and Mistral. The Sinequa Assistant framework includes ready-to-go Assistants along with tools to define custom Assistant workflows so that customers can use an Assistant out of the box, or tailor and manage multiple Assistants from a single platform. These Assistants can be tailored to fit the needs of specific business scenarios and deployed and updated quickly without code or additional infrastructure. Domain-specific assistants for scientists, engineers, lawyers, financial asset managers, and others are available.
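
The citation and traceability behavior described above is a common grounding pattern; here is a minimal, generic Python sketch of it, not Sinequa’s API. Retrieved passages (assumed to come from a prior search step) are numbered, the model is asked to cite those numbers, and the source URLs travel with the answer so responses stay traceable.

    # Generic grounded-answer-with-citations pattern (illustrative; not Sinequa's API).
    from openai import OpenAI

    llm = OpenAI()   # reads OPENAI_API_KEY

    def answer_with_citations(question: str, passages: list[dict]) -> str:
        """passages: [{"text": ..., "url": ...}, ...] returned by a prior search step."""
        numbered = "\n".join(
            f"[{i + 1}] {p['text']} (source: {p['url']})"
            for i, p in enumerate(passages)
        )
        resp = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    "Answer the question using only the numbered sources, and cite "
                    f"them inline like [1].\n\nSources:\n{numbered}\n\n"
                    f"Question: {question}"
                ),
            }],
        )
        return resp.choices[0].message.content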

https://www.sinequa.com/company/press/sinequa-augments-companies-with-release-of-new-generative-ai-assistants

Tonic.ai launches secure unstructured data lakehouse for LLMs

Tonic.ai launched a secure data lakehouse for LLMs, Tonic Textual, to enable AI developers to securely leverage unstructured data for retrieval-augmented generation (RAG) systems and large language model (LLM) fine-tuning. Tonic Textual is a data platform designed to eliminate the integration and privacy challenges that bottleneck RAG ingestion and LLM training. Leveraging its expertise in data management and realistic data synthesis, Tonic.ai has developed a solution to transform siloed, messy, and complex unstructured data into protected, AI-ready formats ahead of embedding, fine-tuning, or vector database ingestion. With Tonic Textual, developers can:

  1. Build, schedule, and automate unstructured data pipelines that extract and transform data into a standardized format convenient for embedding, ingesting into a vector database, or pre-training and fine-tuning LLMs. Textual supports TXT, PDF, CSV, TIFF, JPG, PNG, JSON, DOCX and XLSX out-of-the-box.
  2. Detect, classify, and redact sensitive information in unstructured data, and re-seed redactions with synthetic data to maintain the semantic meaning. Textual leverages proprietary named entity recognition (NER) models trained on a diverse data set spanning domains, formats, and contexts to ensure sensitive data is identified and protected (a generic sketch of this detect-and-redact step follows the list).
  3. Enrich your vector database with document metadata and contextual entity tags to improve retrieval speed and context relevance in RAG systems.
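
As a concrete illustration of the detect, redact, and re-seed step in item 2, here is a minimal Python sketch using spaCy’s general-purpose NER model and Faker for synthetic stand-ins; it is not Tonic Textual’s API or its proprietary NER models, and the label-to-generator mapping is an assumption.

    # Sketch: detect entities, then replace them with synthetic values that
    # preserve semantic meaning (illustrative; not Tonic Textual's models).
    import spacy
    from faker import Faker

    nlp = spacy.load("en_core_web_sm")   # small general-purpose NER model
    fake = Faker()

    # Assumed mapping from NER labels to synthetic generators.
    SYNTHESIZERS = {
        "PERSON": fake.name,
        "ORG": fake.company,
        "GPE": fake.city,
    }

    def redact_with_synthetic(text: str) -> str:
        doc = nlp(text)
        out, last = [], 0
        for ent in doc.ents:
            if ent.label_ in SYNTHESIZERS:
                out.append(text[last:ent.start_char])
                out.append(SYNTHESIZERS[ent.label_]())   # synthetic stand-in
                last = ent.end_char
        out.append(text[last:])
        return "".join(out)

    print(redact_with_synthetic("Maria Chen from Acme Corp met the team in Boston."))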

https://www.tonic.ai/textual


© 2025 The Gilbane Advisor
