Curated for content, computing, and digital experience professionals

Category: Enterprise search & search technology

Research, analysis, and news about enterprise search and search markets, technologies, practices, and strategies, covering applications such as semantic search, intranet and workplace collaboration, and ecommerce.

Before we consolidated our blogs, industry veteran Lynda Moulton authored our popular enterprise search blog. All of her posts are collected in this category, alongside other enterprise search news and analysis.

For older, long-form reports, papers, and research on these topics, see our Resources page.

Perplexity introduces Perplexity Pages

Snippets from the Perplexity blog…

You’ve used Perplexity to search for answers, explore new topics, and expand your knowledge. Now, it’s time to share what you learned. Meet Perplexity Pages, your new tool for easily transforming research into visually stunning, comprehensive content. Whether you’re crafting in-depth articles, detailed reports, or informative guides, Pages streamlines the process so you can focus on sharing your knowledge with the world.

Pages lets you effortlessly create, organize, and share information. Search any topic, and instantly receive a well-structured, beautifully formatted article. Publish your work to our growing library of user-generated content and share it directly with your audience with a single click. What sets Perplexity Pages apart?

  • Customizable: Tailor the tone of your Page to resonate with your target audience, whether you’re writing for general readers or subject matter experts.
  • Adaptable: Easily modify the structure of your article—add, rearrange, or remove sections to best suit your material and engage your readers.
  • Visual: Elevate your articles with visuals generated by Pages, uploaded from your personal collection, or sourced online.

Pages is rolling out to users now. Log in to your Perplexity account and select “Create a Page” in the library tab.

https://www.perplexity.ai/page/new

Sinequa releases new generative AI assistants

Sinequa announced the availability of Sinequa Assistants: enterprise generative AI assistants that integrate with enterprise content and applications to augment and transform knowledge work. Sinequa’s Neural Search complements GenAI and provides the foundation for Sinequa’s Assistants. Its capabilities go beyond RAG’s conventional search-and-summarize paradigm to intelligently execute complex, multi-step activities, all grounded in facts to augment the way employees work.

Sinequa’s Assistants leverage all company content and knowledge to generate contextually relevant insights and recommendations. Optimized for scale with three custom-trained small language models (SLMs), Sinequa Assistants help ensure accurate conversational responses on any internal topic, complete with citations and traceability to the original source.

Sinequa Assistants work with any public or private generative LLM, including Cohere, OpenAI, Google Gemini, Microsoft Azure OpenAI, and Mistral. The Sinequa Assistant framework includes ready-to-go Assistants along with tools to define custom Assistant workflows, so customers can use an Assistant out of the box or tailor and manage multiple Assistants from a single platform. These Assistants can be adapted to specific business scenarios and deployed and updated quickly without code or additional infrastructure. Domain-specific assistants for scientists, engineers, lawyers, financial asset managers, and others are available.
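
Sinequa does not publish its Assistant framework as code, but the grounded, citation-bearing answers described above follow a familiar retrieval-augmented pattern. The sketch below is a generic illustration of that pattern, not Sinequa’s API; the retrieve() and call_llm() helpers and all data in them are hypothetical placeholders.

```python
# Illustrative sketch only: Sinequa does not publish its Assistant framework as
# code. This shows the general "grounded answer with citations" pattern the
# announcement describes, using hypothetical retrieve() and call_llm() helpers.

from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # identifier of the source document (for traceability)
    text: str    # retrieved snippet used as grounding evidence

def retrieve(query: str, k: int = 5) -> list[Passage]:
    """Hypothetical stand-in for an enterprise neural search call."""
    return [Passage(doc_id="policy-042",
                    text="Employees accrue 25 vacation days per year.")]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any generative LLM (OpenAI, Gemini, Mistral, ...)."""
    return "Employees accrue 25 vacation days per year [1]."

def answer_with_citations(question: str) -> str:
    passages = retrieve(question)
    # Number each passage so the model can cite it and the UI can trace it back.
    context = "\n".join(f"[{i+1}] ({p.doc_id}) {p.text}"
                        for i, p in enumerate(passages))
    prompt = (
        "Answer using ONLY the numbered passages below and cite them like [1].\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    answer = call_llm(prompt)
    sources = ", ".join(f"[{i+1}] {p.doc_id}" for i, p in enumerate(passages))
    return f"{answer}\nSources: {sources}"

print(answer_with_citations("How many vacation days do employees get?"))
```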

https://www.sinequa.com/company/press/sinequa-augments-companies-with-release-of-new-generative-ai-assistants

Tonic.ai launches secure unstructured data lakehouse for LLMs

Tonic.ai launched a secure data lakehouse for LLMs, Tonic Textual, to enable AI developers to securely leverage unstructured data for retrieval-augmented generation (RAG) systems and large language model (LLM) fine-tuning. Tonic Textual is a data platform designed to eliminate the integration and privacy bottlenecks that precede RAG ingestion or LLM training. Leveraging its expertise in data management and realistic synthesis, Tonic.ai has developed a solution that tames and protects siloed, messy, and complex unstructured data, turning it into AI-ready formats ahead of embedding, fine-tuning, or vector database ingestion. With Tonic Textual, users can:

  1. Build, schedule, and automate unstructured data pipelines that extract and transform data into a standardized format convenient for embedding, ingesting into a vector database, or pre-training and fine-tuning LLMs. Textual supports TXT, PDF, CSV, TIFF, JPG, PNG, JSON, DOCX and XLSX out-of-the-box.
  2. Detect, classify, and redact sensitive information in unstructured data, and re-seed redactions with synthetic data to maintain the semantic meaning. Textual leverages proprietary named entity recognition (NER) models trained on a diverse data set spanning domains, formats, and contexts to ensure sensitive data is identified and protected.
  3. Enrich your vector database with document metadata and contextual entity tags to improve retrieval speed and context relevance in RAG systems.
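
For a concrete picture of items 2 and 3 above, here is a minimal, generic sketch of the redact-and-re-seed idea. It does not use Tonic Textual’s actual SDK; the detect_entities() and synthesize() helpers are invented stand-ins, and a real deployment would rely on trained NER models rather than a regex.

```python
# Generic sketch of "redact and re-seed" plus entity tagging; detect_entities()
# and synthesize() are hypothetical helpers, not Tonic Textual's API.

import re

def detect_entities(text: str) -> list[dict]:
    """Toy NER pass: real systems use trained models, not regexes."""
    return [{"label": "EMAIL", "start": m.start(), "end": m.end()}
            for m in re.finditer(r"\b\S+@\S+\.\w+\b", text)]

def synthesize(label: str) -> str:
    """Return a realistic but fake value of the same entity type."""
    fakes = {"EMAIL": "jane.doe@example.com"}
    return fakes.get(label, "[REDACTED]")

def redact_and_tag(text: str) -> tuple[str, list[str]]:
    """Replace sensitive spans with synthetic values, preserving semantic
    meaning, and collect entity tags that could later be stored as
    vector-database metadata for retrieval."""
    entities = sorted(detect_entities(text), key=lambda e: e["start"], reverse=True)
    tags = []
    for e in entities:
        text = text[:e["start"]] + synthesize(e["label"]) + text[e["end"]:]
        tags.append(e["label"])
    return text, tags

doc = "Contact bob.smith@acme.com about the Q3 audit."
clean, tags = redact_and_tag(doc)
print(clean)   # Contact jane.doe@example.com about the Q3 audit.
print(tags)    # ['EMAIL']
```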

https://www.tonic.ai/textual

DataStax to launch new Hyper-Converged Data Platform

DataStax announced the upcoming launch of DataStax HCDP (Hyper-Converged Data Platform), in addition to the upcoming release of DataStax Enterprise (DSE) 6.9. Both products enable customers to add generative AI and vector search capabilities to their self-managed, enterprise data workloads. DataStax HCDP is designed for modern data centers and Hyper-Converged Infrastructure (HCI) to support the breadth of data workloads and AI systems. It supports on-premises enterprise data systems built to AI-enable their data, and is designed for enterprise operators and architects.

The combination of OpenSearch’s enterprise search capabilities with the high-performance vector search of the DataStax cloud-native, NoSQL Hyper-Converged Database enables users to speed RAG and knowledge retrieval applications into production.

Hyper-converged streaming (HCS) built with Apache Pulsar is designed to provide data communications for a modern infrastructure. With native support of inline data processing and embedding, HCS brings vector data to the edge, allowing for faster response times and enabling event data for better contextual generative AI experiences.

HCDP provides rapid provisioning and data APIs built around the DataStax one-stop GenAI stack for enterprise retrieval-augmented generation (RAG), and it’s all built on the open-source Apache Cassandra platform.
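
As a rough illustration of what vector search on the underlying Apache Cassandra platform looks like, the sketch below uses CQL in the style of Cassandra 5.0’s vector support via the Python driver. Exact syntax and availability in HCDP or DSE 6.9 may differ; the host, keyspace, table, and data are placeholders.

```python
# Hedged sketch: vector search over Apache Cassandra using CQL in the style of
# Cassandra 5.0 vector support. Names and data below are invented placeholders.

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo_ks")   # assumes this keyspace already exists

# A table whose rows carry a 3-dimensional embedding (real embeddings are larger).
session.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id int PRIMARY KEY,
        body text,
        embedding vector<float, 3>
    )""")

# A storage-attached index enables approximate-nearest-neighbour (ANN) queries.
session.execute(
    "CREATE CUSTOM INDEX IF NOT EXISTS docs_ann ON docs (embedding) "
    "USING 'StorageAttachedIndex'")

session.execute(
    "INSERT INTO docs (id, body, embedding) VALUES (1, 'hello', [0.1, 0.2, 0.3])")

# Retrieve the rows whose embeddings are closest to the query vector.
rows = session.execute(
    "SELECT id, body FROM docs ORDER BY embedding ANN OF [0.1, 0.2, 0.3] LIMIT 3")
for row in rows:
    print(row.id, row.body)
```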

https://www.datastax.com/press-release/datastax-launches-new-hyper-converged-data-platform-giving-enterprises-the-complete-modern-data-center-suite-ceeded-for-ai-in-production

Elastic announces Search AI Lake to scale low-latency search

Elastic, a Search AI company, today announced Search AI Lake, a cloud-native architecture optimized for real-time, low-latency applications including search, retrieval augmented generation (RAG), observability and security. The Search AI Lake also powers the new Elastic Cloud Serverless offering. All operations, from monitoring and backup to configuration and sizing, are managed by Elastic – users just bring their data and choose Elasticsearch, Elastic Observability, or Elastic Security on Serverless. Benefits include:

  • Fully decoupling storage and compute enables scalability and reliability using object storage, while dynamic caching supports high throughput, frequent updates, and interactive querying of large data volumes.
  • Multiple enhancements maintain query performance even when the data is safely persisted on object stores.
  • By separating indexing and search at a low level, the platform can automatically scale to meet the needs of a wide range of workloads.
  • Users can leverage a native suite of AI relevance, retrieval, and reranking capabilities, including a native vector database integrated into Lucene, open inference APIs, semantic search, and first- and third-party transformer models, which work with the array of search functionalities.
  • Elasticsearch’s query language, ES|QL, is built in to transform, enrich, and simplify investigations with fast concurrent processing irrespective of data source and structure.
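
As a small example of the ES|QL pipe syntax mentioned in the last bullet, the query below aggregates recent log documents by host. It is sent to the ES|QL REST endpoint (_query) available in recent Elasticsearch releases; the host, credentials, and index pattern are placeholders.

```python
# Hedged example of an ES|QL query sent to Elasticsearch's _query endpoint.
# Host, credentials, and index pattern are placeholders for a local dev setup.

import requests

ESQL = """
FROM logs-*
| WHERE @timestamp > NOW() - 1 hour
| STATS errors = COUNT(*) BY host.name
| SORT errors DESC
| LIMIT 5
"""

resp = requests.post(
    "https://localhost:9200/_query",
    json={"query": ESQL},
    auth=("elastic", "changeme"),  # placeholder credentials
    verify=False,                  # self-signed certs in local dev only
)
resp.raise_for_status()
result = resp.json()

# The response carries column metadata plus row values; zip them together.
for row in result["values"]:
    print(dict(zip([c["name"] for c in result["columns"]], row)))
```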

https://ir.elastic.co/news/news-details/2024/Elastic-Announces-First-of-its-kind-Search-AI-Lake-to-Scale-Low-Latency-Search/default.aspx

SoundHound AI and Perplexity partner

SoundHound AI, Inc., a voice artificial intelligence vendor, announced it has partnered with Perplexity, the conversational AI-powered answer engine. Together they will bring Perplexity’s online LLM capabilities to SoundHound Chat AI – a voice assistant that utilizes hundreds of real-time domains, as well as generative AI responses. The SoundHound Chat AI assistant will leverage Perplexity to provide accurate, up-to-date responses to web-based queries that static LLMs cannot currently answer – expanding the type and complexity of the questions the assistant is able to handle.

For example, a user can ask a question like: “How does the price of gas this week compare to last week?” and the response will combine accurate, live information on gas prices with a comprehensive generative AI-style explanation that provides further context. The user can then follow up with, “Navigate to the nearest gas station,” which uses SoundHound’s technology to seamlessly incorporate data from the appropriate sources and integrate with the navigation software of a device such as a car or a phone.

The assistant also utilizes a specially developed arbitration technology that uses a combination of software engineering and machine learning to intelligently select the most appropriate response, helping to minimize harmful “AI hallucinations.”
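
SoundHound has not published how its arbitration works, so the sketch below is purely illustrative of the general idea of selecting among candidate responses from different engines; the Candidate structure and the scoring heuristic are invented.

```python
# Purely illustrative: SoundHound's actual arbitration logic is not public.
# This sketches choosing among candidate responses from different engines.

from dataclasses import dataclass

@dataclass
class Candidate:
    source: str        # e.g. "realtime_domain", "static_llm", "online_llm"
    text: str
    confidence: float  # engine-reported confidence in [0, 1]

def arbitrate(candidates: list[Candidate], needs_fresh_data: bool) -> Candidate:
    """Prefer grounded, real-time sources for time-sensitive questions,
    otherwise fall back to the highest-confidence answer overall."""
    if needs_fresh_data:
        live = [c for c in candidates
                if c.source in ("realtime_domain", "online_llm")]
        if live:
            return max(live, key=lambda c: c.confidence)
    return max(candidates, key=lambda c: c.confidence)

answer = arbitrate(
    [Candidate("static_llm", "Gas averaged about $3.50 last year.", 0.55),
     Candidate("online_llm", "Gas is $3.61 this week, up 4 cents from last week.", 0.82)],
    needs_fresh_data=True,
)
print(answer.text)
```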

https://www.soundhound.com/newsroom/press-releases/soundhound-ai-and-perplexity-partner-to-bring-online-llms-to-its-next-gen-voice-assistants-across-cars-and-iot-devices/
https://www.perplexity.ai

ThoughtSpot renames and adds features to ThoughtSpot Everywhere

ThoughtSpot, an AI-powered analytics company, today announced a series of initiatives for developers and product builders to help their customers, partners, and employees with generative AI and embedded natural language search, including a new pricing edition, a Vercel Marketplace listing, support channels, and new courses and certifications. 

ThoughtSpot has renamed its embedded solution, previously known as ThoughtSpot Everywhere, to ThoughtSpot Embedded, reflecting ThoughtSpot’s vision to make analytics invisible – seamlessly embedded into every data application and user workflow – and its business outcomes visible. New features and offerings include:

  • Developer Edition. The new Developer Edition gives developers exploring ThoughtSpot in a free trial the opportunity to try ThoughtSpot Embedded capabilities with their specific use case, free for 12 months.
  • Vercel Marketplace Integration. The new app listing for ThoughtSpot enables developers to quickly embed ThoughtSpot’s AI-powered analytics into their apps via the Vercel Marketplace.
  • Discord Channel. Developers can ask ThoughtSpot Embedded subject matter experts technical questions and receive guidance in the ThoughtSpot Discord community.
  • New ThoughtSpot Embedded Courses and Certifications. ThoughtSpot University is releasing a new paid certification for ThoughtSpot Embedded, the ThoughtSpot Embedded Developer. The new certification is for developers looking to attain formal recognition of their skills and knowledge in AI-Powered Analytics with ThoughtSpot Embedded.

https://www.thoughtspot.com/press-releases/thoughtspot-makes-embedding-ai-powered-analytics-easy-and-ubiquitous-for-everyone

Expert.ai launches Insight Engine for Life Sciences

Expert.ai, specialists in providing AI-powered language solutions to enterprises, today announced the launch of the expert.ai Insight Engine for Life Sciences.

For the world of drug research and development, data is both a challenge to be managed and an opportunity. The ability to effectively and quickly mine scientific and biomedical content for developing new drugs and to design and operate clinical trials is critical. The complexity of the diverse data sources that researchers depend on makes integrating, standardizing, and analyzing them challenging. Commercial licensing and data access restrictions, as well as the lack of granularity and the differing taxonomies used by common search tools, further complicate the process.

Advanced AI technologies can mine and aggregate scientific content, synthesize knowledge, extract relevant information, and reveal hidden correlations. This helps researchers quickly access and analyze the vast amount of relevant information in biomedical and scientific literature, including full texts, speeding up the discovery and development of new drugs and therapies. Expert.ai Insight Engine for Life Sciences supports multiple use cases, including competitive intelligence, clinical trial design optimization, intellectual property protection, and research intelligence.

https://www.expert.ai/expert-ai-launches-insight-engine-for-life-sciences/


© 2024 The Gilbane Advisor
