Curated for content, computing, data, information, and digital experience professionals

Category: Content technology news

Curated information technology news for content technology, computing, and digital experience professionals. News items are edited to remove hype, unhelpful jargon, iffy statements, and quotes, to create a short summary — mostly limited to 200 words — of the important facts with a link back to a useful source for more information. News items are published using the date of the original source here and in our weekly email newsletter.

We focus on product news, but also include selected company news such as mergers and acquisitions and meaningful partnerships. All news items are edited by one of our analysts under the NewsShark byline. See our Editorial Policy.

Note that we also publish news on X/Twitter. Follow us @gilbane.

Krisp launches real-time Voice Translation SDK

Krisp announced the launch of its Voice Translation SDK, enabling CX platform developers to embed real-time multilingual voice-to-voice translation into live customer conversations. The technology has been live in production CX environments since 2025 as part of Krisp’s Call Center AI platform, operating in customer conversations globally before its SDK release.

Real-time voice translation must operate on continuous audio streams where latency, accuracy and conversational flow are tightly linked. Systems must recognize diverse accents, perform reliably in noisy environments and preserve natural turn-taking.

Krisp’s Voice Translation SDK is engineered to balance these competing constraints in live, two-way conversations. It supports any combination of over 60 languages and is optimized for synchronous interactions where clarity and conversational continuity are critical. This enables multilingual interactions within live conversations without requiring human interpreters.

The SDK is available for Windows, macOS and Web developers, allowing integration into both native and browser-based applications. To improve performance in real-world conditions, Krisp applies local Noise Cancellation before audio is processed in the cloud, isolating the primary speaker and improving recognition accuracy. The SDK also supports custom vocabulary and domain-specific dictionaries, enabling teams to enforce terminology and maintain consistency across professional environments.

https://krisp.ai/blog/real-time-voice-translation-sdk/

Dataiku launches 575 Lab, its new open source initiative for responsible AI

As AI moves from pilots to business-critical deployment, the issue is no longer access. It’s trust. Open source tools support that trust by keeping core components inspectable and standardizable, enabling stronger oversight across modern AI systems. Today, Dataiku announced the launch of the 575 Lab, Dataiku’s Open Source Office. The 575 Lab will release two new open-source toolkits designed to help enterprises make AI systems more transparent, governable, and fit for real-world use.

The 575 Lab will focus on delivering deployable tools that strengthen explainability, privacy, and governance across modern AI and agentic systems. The two initial open-source projects will be: 

  • Agent Explainability Tools that will help teams trace and understand decision-making across multi-step agent workflows, making agent decisions transparent for data scientists, compliance teams, and end users.
  • Privacy-Preserving Proxies that will enable safer use of closed-source models by protecting sensitive data end-to-end, and that teams will be able to run locally.

Both projects will be designed to support responsible enterprise AI, with a focus on reliability, security, transparency, and explainability.

The 575 Lab is now available to the community of AI specialists, data scientists, and developers responsible for creating, deploying, and scaling AI agents and applications.

https://www.dataiku.com/press-releases/dataiku-launches-575-lab/

Graphwise announced the immediate availability of GraphRAG

Graphwise announced the availability of Graphwise GraphRAG, a low-code AI-workflow engine designed to turn “Python prototypes” into production-grade systems. It is based on a trusted semantic layer that reduces hallucinations and delivers precise, verifiable answers. GraphRAG unites LLMs, enterprise data, structured knowledge, and multiple search methods to deliver transparent, enterprise-ready answers.

Unlike standard RAG, which “flattens” data into chunks and thereby loses relationships and invites hallucinations, GraphRAG treats the knowledge graph as a trusted semantic backbone, grounding AI responses in verifiable enterprise facts and complex relationships. Graphwise bridges the gap between complex enterprise data and functional AI agents. Features include:

  • Low-Code Visual Engine enables subject matter experts to adjust AI logic visually.
  • Out-of-the-Box Templates provide guardrails and support query expansion for faster time-to-value.
  • Semantic Metadata Control Plane reduces hallucinations and improves AI accuracy. AI responses are grounded in an organization’s “enterprise truth,” reducing risk.
  • Explainability and Provenance Panels support regulatory compliance. Built-in traceability affords transparency into how an AI response was produced.
  • Visual Debugging and Monitoring reduce maintenance costs by eliminating black-box code.
  • SKOS-style Concept Enrichment harnesses domain-specific intelligence, so the AI understands company-specific jargon, acronyms, and synonyms out of the box.
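The concept-enrichment idea above can be sketched in a few lines: expand a query with known acronyms and synonyms before retrieval so that jargon still matches documents. This is a toy illustration with an invented vocabulary, not Graphwise’s actual SKOS model or API.

```python
# Toy sketch of SKOS-style concept enrichment: expand a query with
# known acronyms/synonyms before retrieval. The vocabulary below is
# an invented example, not Graphwise data.
CONCEPTS = {
    "kg": ["knowledge graph"],
    "rag": ["retrieval-augmented generation"],
}

def expand_query(query: str) -> list[str]:
    """Return the query tokens plus any mapped synonym expansions."""
    terms = []
    for token in query.lower().split():
        terms.append(token)
        terms.extend(CONCEPTS.get(token, []))
    return terms

print(expand_query("KG RAG tutorial"))
# ['kg', 'knowledge graph', 'rag', 'retrieval-augmented generation', 'tutorial']
```

In a real deployment the mapping would come from a curated SKOS vocabulary rather than a hard-coded dictionary.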

https://graphwise.ai/news/new-graphrag-solution-moves-beyond-vector-only-rag-knowledge-graphs-provide-context-and-common-sense-to-ai

300-node clusters now supported in CockroachDB

From the CockroachDB Blog…

As AI-driven and agentic applications push data platforms into new territory, data architects are increasingly forced to choose between correctness, simplicity, and scale. To remove that tradeoff, we’re announcing support for 300-node clusters with 2.2M tpmC and 1.2PB of data in CockroachDB v25.4.4 and beyond. On CockroachDB Cloud, we’re also announcing support for 64 vCPUs per node. All customers will be able to self-serve and select these larger instance types if desired.

Highlights include:

  • ~610K QPS; compared with a previous 9-node cluster run at 17K QPS, CockroachDB scales near-linearly with cluster size.
  • With the same amount of imported data, a run on 25.4 used 30% less storage than a previous run on 25.2, thanks to enhanced compression.
  • Imports on 25.4 were 2× faster than on 25.1, speeding migrations to CockroachDB.
  • ADD COLUMN across 120B rows completed without regression.
  • A 330TB backup and 6 concurrent changefeeds completed in 2 hours and 40 minutes with no impact on foreground traffic.
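The near-linear scaling claim is easy to sanity-check from the numbers in the post: perfect linear scaling from 9 nodes at 17K QPS would predict roughly 567K QPS at 300 nodes, in the same range as the reported ~610K QPS.

```python
# Back-of-the-envelope check of the near-linear scaling claim,
# using only the figures quoted in the post above.
small_nodes, small_qps = 9, 17_000
large_nodes, large_qps = 300, 610_000

# What perfect linear scaling would predict for 300 nodes.
linear_prediction = small_qps * large_nodes / small_nodes
scaling_efficiency = large_qps / linear_prediction

print(f"linear prediction: {linear_prediction:,.0f} QPS")   # ~566,667 QPS
print(f"reported vs. linear: {scaling_efficiency:.2f}x")
```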

Start with $400 in free credits. Or get a free 30-day trial of CockroachDB Enterprise on self-hosted environments.

https://www.cockroachlabs.com/blog/300-node-clusters-supported-cockroachdb

Snowflake makes enterprise data AI-ready with Snowflake Postgres

Snowflake, an AI Data Cloud company, announced advancements that make data AI-ready by design, allowing enterprises to rely on data that is continuously available, usable, and governed as AI transitions from experimentation into production systems. With new enhancements to Snowflake Postgres, the database now runs natively in the AI Data Cloud so enterprises can consolidate their transactional, analytical, and AI use cases onto a single, secure platform. To help ensure AI systems are trusted at enterprise scale, Snowflake is embedding enhanced interoperability, governance, and resilience features into its platform.

Snowflake Postgres is powered by pg_lake, a set of PostgreSQL extensions that let Postgres work within an organization’s open, interoperable lakehouse grounded in Apache Iceberg. Enterprises can use Snowflake Postgres to directly query, manage, and write to Apache Iceberg tables using standard SQL. Because this capability is delivered within a Postgres environment, enterprises can eliminate data movement between transactional and analytical systems.

Enterprises need data that remains open, governed, and resilient as it flows across engines, formats, and environments. Snowflake is expanding how customers access, share, and govern their data. Open Format Data Sharing extends Snowflake’s zero-ETL sharing model to include formats such as Apache Iceberg and Delta Lake.

https://www.snowflake.com/en/news/press-releases/snowflake-makes-enterprise-data-ai-ready-with-snowflake-postgres-and-advanced-innovations-for-open-data-interoperability

Elastic adds high-precision multilingual reranking to new Elastic Inference Service

Elastic, a Search AI Company, made two Jina Rerankers available on Elastic Inference Service (EIS), a GPU-accelerated inference-as-a-service that makes it easy to run fast, high-quality inference without complex setup or hosting. These rerankers bring low-latency, high-precision multilingual reranking to the Elastic ecosystem.

Rerankers improve search quality by reordering results based on semantic relevance, helping surface the most accurate matches for a query. They improve relevance across aggregated, multi-query results, without reindexing or pipeline changes. This makes them valuable for hybrid search, RAG, and context-engineering workflows where better context boosts downstream accuracy. The two new Jina reranker models are optimized for different production needs:
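The mechanics described above can be sketched in a few lines: score each (query, document) pair for relevance, then reorder the candidates by score. In this toy sketch the scoring function is simple token overlap, standing in for a real cross-encoder model call; none of this is Jina’s or Elastic’s actual API.

```python
# Toy sketch of what a reranker does: score each (query, document)
# pair, then reorder the candidate list by score. Token overlap is
# an illustrative stand-in for a cross-encoder model call.

def score(query: str, doc: str) -> float:
    """Fraction of query tokens that appear in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def rerank(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Each document is scored independently, so arbitrarily large
    # candidate sets can be processed and only the best top_k kept.
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

docs = [
    "Quarterly revenue report for 2025",
    "Password reset steps for end users",
    "How to reset a password in the admin console",
]
print(rerank("reset password", docs, top_k=2))
```

Because reranking operates on an already-retrieved candidate list, it slots in after any retrieval method (keyword, vector, or hybrid) without reindexing, which is the property the announcement highlights.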

Jina Reranker v2 (jina-reranker-v2-base-multilingual)
Built for scalable, agentic workflows.

  • Low-latency inference with strong multilingual performance.
  • Ability to select relevant SQL tables and external functions that best match user queries.
  • Scores documents independently to handle arbitrarily large candidate sets.

Jina Reranker v3 (jina-reranker-v3)
Optimized for high-precision shortlist reranking.

  • Optimized for low-latency inference and efficient deployment in production settings.
  • Strong multilingual performance; maintains stable top-k rankings under permutation.
  • Cost-efficient, cross-document reranking: v3 reranks up to 64 documents together in a single inference call, reasoning across the full candidate set to improve ordering when results are similar or overlapping.
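The 64-document limit on v3’s cross-document calls implies that larger candidate sets must be split into batches before reranking. The helper below is a generic sketch of that constraint; the batch size is the limit stated above, but the function itself is an assumption for illustration, not part of any Jina or Elastic API.

```python
# Sketch of the batching constraint described above: a cross-document
# reranker like jina-reranker-v3 accepts up to 64 candidates per call,
# so larger candidate sets must be split into batches first.
MAX_DOCS_PER_CALL = 64

def batched(docs, size=MAX_DOCS_PER_CALL):
    """Yield successive slices of at most `size` documents."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

batches = list(batched([f"doc-{i}" for i in range(150)]))
print([len(b) for b in batches])  # [64, 64, 22]
```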

https://ir.elastic.co/news/news-details/2026/Elastic-Adds-High-Precision-Multilingual-Reranking-to-Elastic-Inference-Service-with-Jina-Models/default.aspx

Upland announces BA Insight Platform with integrated AI search experiences for enterprises

Upland Software, Inc., a provider of AI-powered knowledge and content management software, announced the Upland BA Insight Platform. The new BA Insight Platform incorporates SmartHub, ConnectivityHub, AutoClassifier, Smart Preview, and Connectors to deliver search experiences that are more connected, more contextual, and more actionable. Features include:

  • Knowledge Graphs to deliver deeper, connected, and more contextualized insights by mapping relationships across complex datasets
  • Agentic Retrieval-Augmented Generation (RAG) to provide more accurate answers to complex questions through conversational AI interfaces
  • Amazon Q Business Integration to enable users to connect and perform generative actions against organizational content via an AI-powered assistant

BA Insight introduces native integrations with Amazon Q Business, the AWS generative AI assistant, enabling organizations to unlock conversational search and gain actionable insights across all content sources, securely. The unified BA Insight enterprise search and AI enablement platform is available in AWS Marketplace.

https://investor.uplandsoftware.com/news/news-details/2026/New-Upland-BA-Insight-Platform-Delivers-Integrated-AI-Search-Experiences-for-Enterprises/default.aspx

DeepL launches voice API for real-time speech transcription and translation for instant multilingual communication

DeepL, a global AI product and research company, announced the general availability of DeepL Voice API. Developers can now integrate real-time voice transcription and translation capabilities into their applications, enhancing multilingual support for businesses.

The DeepL Voice API allows businesses to stream audio and receive transcriptions in the source language, along with translations into up to five target languages. The API provides a seamless experience, so language barriers do not hinder effective communication.
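The flow described above is one stream in, a source-language transcript plus up to five translations out. As a hedged illustration of how a client might render such results, the sketch below invents an event structure; the field names and shapes are assumptions, not DeepL’s actual wire format.

```python
# Hypothetical sketch of consuming Voice API output: the service
# streams audio in and emits a source-language transcript plus
# translations into up to five target languages. The event shape
# and field names are assumptions for illustration only.
MAX_TARGET_LANGS = 5  # limit stated in the announcement

def format_event(event: dict) -> list[str]:
    """Render one transcription event and its translations as lines."""
    translations = event.get("translations", {})
    assert len(translations) <= MAX_TARGET_LANGS, "too many target languages"
    lines = [f"[{event['source_lang']}] {event['transcript']}"]
    lines += [f"[{lang}] {text}" for lang, text in translations.items()]
    return lines

event = {
    "source_lang": "de",
    "transcript": "Guten Tag, wie kann ich helfen?",
    "translations": {"en": "Hello, how can I help?"},
}
print("\n".join(format_event(event)))
```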

The DeepL Voice API enables businesses to:

  • Hire for expertise, not language coverage: contact centers can staff agents who understand the customer issue and the business context, even when they do not speak the customer’s language.
  • Expand talent pools while managing costs: by reducing the need for language-specific staffing, teams can centralize or distribute support more flexibly, which can lower operating costs and improve coverage planning.
  • Provide reliable coverage in urgent moments: real-time translation helps teams maintain service levels during nights, weekends, and holidays, when fewer specialized language agents are available.
  • Enable two-way understanding, not just text on screen: agents can follow the conversation through live translated audio, alongside on-screen transcription and translation, so they can respond naturally and confidently in the moment.

https://www.deepl.com/en/press-release/deepl_launches_voice_api_for_real_time_speech_transcription_and_translation


© 2026 The Gilbane Advisor
