Curated for content, computing, data, information, and digital experience professionals

Author: NewsShark

AODocs launches External Portals to securely streamline document collaboration

AODocs, an enterprise document management platform, announced the launch of AODocs External Portals, which enables organizations to collaborate with external parties while maintaining document control and audit-ready traceability.

With External Portals, organizations can share documents, request files, and track progress in one unified space—eliminating reliance on out-of-control shadow IT, ad hoc email threads, consumer file-sharing links, or disconnected third-party tools.

The new capability is designed to simplify external collaboration while remaining fully integrated with AODocs’ document control platform, preserving the rigorous governance, security, and compliance enterprises require from their DMS (Document Management System).

As organizations increasingly work with vendors, customers, auditors, and other third parties, document exchange has become a point of risk. Files are duplicated across inboxes and unmanaged drives, processes sprawl across multiple unapproved tools, and audit trails break.

AODocs External Portals reduce these “shadow IT” risks by keeping all external exchanges within a single, unified platform—where external collaboration and document control processes live together in a governed, traceable, and policy-consistent environment.

External Portals are designed for high-stakes, document-heavy processes, from vendor onboarding and loan processing to HR workflows and case management, allowing organizations to simplify external collaboration while preserving security, compliance, and traceability.

https://www.aodocs.com/products/document-management-system

Flux voice AI platform now supports on-the-fly configurations

Deepgram announced Flux “on-the-fly configuration” for its voice AI platform, which lets developers dynamically update speech recognition settings — such as keyterms and end-of-turn detection — during a live voice conversation without disconnecting or restarting the audio stream.

A support call moves from identity verification to troubleshooting to scheduling a follow-up. A healthcare call shifts from intake questions to medication names to billing. Each phase has different intents, different critical phrases.

Today, teams configure their ASR (automatic speech recognition) once at connection time and live with it for the entire call. They load every keyterm they might need upfront, diluting biasing effectiveness across the board, or they keep the list minimal and accept lower accuracy on critical phrases. When the conversation shifts enough that the configuration truly doesn’t fit, the options are disconnecting and reconnecting mid-call or managing multiple concurrent streams and swapping between them.

Now your ASR configuration can shift with the conversation. No more choosing between loading every keyterm upfront or accepting lower accuracy. No more static configuration that’s “good enough” for the whole call. One connection that adapts as the call unfolds.

On-the-fly configuration is available now in the Flux v2 WebSocket API.
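The idea can be sketched as a control message sent over the already-open stream. Note that the message type and field names below are illustrative assumptions, not Deepgram's documented Flux protocol; consult the Flux v2 WebSocket API reference for the actual message shape.

```python
import json

def build_config_update(keyterms, eot_threshold):
    """Build a hypothetical mid-stream configuration message.

    "UpdateConfig", "keyterms", and "eot_threshold" are assumed
    names for illustration only -- the real Flux v2 API defines
    its own message schema.
    """
    return json.dumps({
        "type": "UpdateConfig",          # assumed control-message type
        "keyterms": keyterms,            # phrases to bias recognition toward
        "eot_threshold": eot_threshold,  # assumed end-of-turn sensitivity
    })

# As a support call moves from identity verification to scheduling,
# swap the biasing list without dropping the audio stream:
msg = build_config_update(["follow-up", "Tuesday", "reschedule"], 0.7)
# ws.send(msg)  # sent over the already-open WebSocket connection
```

The point of the pattern is that only a small JSON control frame travels mid-call; the audio stream itself is never disconnected or restarted.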

https://deepgram.com/learn/flux-on-the-fly-configuration

Research Solutions launches Scite MCP, connecting ChatGPT, Claude, & other AI tools to scientific literature

Research Solutions, provider of AI-powered scientific research tools, launched Scite MCP, which enables researchers and developers to search scientific literature and evaluate the trustworthiness of research findings without leaving the AI tools they already use.

Large language models can generate text on most topics, but coverage of scholarly material is limited, and they struggle to distinguish well-supported findings from contested ones.

Scite MCP solves this by giving AI tools direct access to over 250 million indexed articles, book chapters, preprints, and datasets, along with Scite’s proprietary Smart Citations, which classify each citation as supporting, mentioning, or contrasting the findings it references.

  • Answers grounded in trustworthy research: AI tools connected to Scite can return responses backed by specific, verifiable papers rather than generating unsourced claims
  • Citation context: Users and AI agents can see not only that a paper was cited, but also whether subsequent research supported, mentioned, or contrasted its findings
  • Broad literature coverage: Access to over 250 million scientific articles, book chapters, preprints, and datasets
  • Works across tools: Compatible with ChatGPT, Claude, Microsoft Copilot, Cursor, Claude Code, and any MCP-enabled application

Scite MCP currently provides access to Open Access articles, with publisher discussions underway to expand coverage to paywalled content.
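Under MCP, clients invoke server-side tools with a JSON-RPC 2.0 `tools/call` request, so a connected assistant can query the literature like any other tool. The sketch below shows that generic message shape; the tool name and argument keys are hypothetical, since Scite's actual MCP server defines its own tool schema.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 `tools/call` request, the message an
    MCP client sends to invoke a tool on an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# An MCP-enabled assistant asking an (assumed) citation-search tool
# whether later papers support or contrast a claim:
req = mcp_tool_call(
    1,
    "search_citations",  # hypothetical tool name
    {"query": "intermittent fasting cardiovascular outcomes"},
)
```

Because the transport is plain JSON-RPC, the same request works from ChatGPT, Claude, Cursor, or any other MCP-enabled client.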

https://researchsolutions.investorroom.com/2026-02-26-Research-Solutions-Launches-Scite-MCP,-Connecting-ChatGPT,-Claude,-Other-AI-Tools-To-Scientific-Literature

Siteimprove expands its agentic content intelligence platform

Siteimprove released its latest AI agent capabilities. The updates include conversational analytics that enables non-technical users to get answers and generate reports and dashboards using natural language. Customers also gain new content accessibility coverage for PDFs and images, and keyword intelligence for search in the world of “Answer Engine Optimization” (AEO).

These capabilities help customers meet digital accessibility regulations such as the Americans with Disabilities Act (ADA) and the European Accessibility Act (EAA), while helping brands improve discoverability across answer engines and generative engines. Capabilities include:

  • Conversational Analytics Agent: Ask questions in natural language and instantly get answers to understand what matters across analytics data – democratizing insights across teams. Teams can quickly task the agent to generate answers on campaign performance, funnel diagnostics, and recommended targets for course correction.
  • PDF and Image Accessibility Agent: PDF Validate and Contextual Image Analysis agent surfaces accessibility issues before content goes live, helping teams reduce risk earlier in the content lifecycle. This helps customers increase accessibility coverage across more content types.
  • Keyword Intelligence Agent: Expanded keyword and topic intelligence agent uncovers competitive and topical gaps, giving teams deeper insight into growth opportunities for both traditional and AI-driven search in the world of AEO.

https://www.siteimprove.com/press/siteimprove-expands-its-agentic-content-intelligence-platform

Krisp launches real-time Voice Translation SDK

Krisp announced the launch of its Voice Translation SDK, enabling CX platform developers to embed real-time multilingual voice-to-voice translation into live customer conversations. The technology has been live in production CX environments since 2025 as part of Krisp’s Call Center AI platform, operating in customer conversations globally before its SDK release.

Real-time voice translation must operate on continuous audio streams where latency, accuracy and conversational flow are tightly linked. Systems must recognize diverse accents, perform reliably in noisy environments and preserve natural turn-taking.

Krisp’s Voice Translation SDK is engineered to balance these competing constraints in live, two-way conversations. It supports any combination of over 60 languages and is optimized for synchronous interactions where clarity and conversational continuity are critical. This enables multilingual interactions within live conversations without requiring human interpreters.

The SDK is available for Windows, macOS and Web developers, allowing integration into both native and browser-based applications. To improve performance in real-world conditions, Krisp applies local Noise Cancellation before audio is processed in the cloud, isolating the primary speaker and improving recognition accuracy. The SDK also supports custom vocabulary and domain-specific dictionaries, enabling teams to enforce terminology and maintain consistency across professional environments.
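The processing order described above, local noise cancellation first, then cloud recognition and translation with a custom vocabulary applied, can be sketched as below. All function and parameter names are hypothetical stand-ins, not Krisp's actual SDK API.

```python
# Minimal sketch of the described pipeline, under assumed names:
# step 1 runs on-device, step 2 runs in the cloud.

def cancel_noise_locally(frame: bytes) -> bytes:
    """Stand-in for on-device noise cancellation that isolates the
    primary speaker before any audio leaves the machine."""
    return frame  # a real implementation would return a denoised frame

def translate_in_cloud(frame: bytes, src: str, dst: str,
                       custom_vocab: dict[str, str]) -> str:
    """Stand-in for cloud recognition + translation; custom_vocab
    models the domain-specific dictionary support, enforcing
    preferred terminology in the output."""
    text = "the poliza is active"  # pretend raw translation output
    for term, preferred in custom_vocab.items():
        text = text.replace(term, preferred)
    return text

def process_frame(frame: bytes) -> str:
    clean = cancel_noise_locally(frame)  # local, before cloud upload
    return translate_in_cloud(clean, "es", "en",
                              custom_vocab={"poliza": "policy"})

result = process_frame(b"\x00" * 320)  # -> "the policy is active"
```

Running noise cancellation locally before upload is what lets the cloud models see a cleaner signal, which is the stated reason for the improved recognition accuracy.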

https://krisp.ai/blog/real-time-voice-translation-sdk/

Dataiku launches 575 Lab, its new open source initiative for responsible AI

As AI moves from pilots to business-critical deployment, the issue is no longer access. It’s trust. Open source tools support that trust by keeping core components inspectable and standardizable, enabling stronger oversight across modern AI systems. Today, Dataiku announced the launch of the 575 Lab, Dataiku’s Open Source Office. The 575 Lab will release two new open-source toolkits designed to help enterprises make AI systems more transparent, governable, and fit for real-world use.

The 575 Lab will focus on delivering deployable tools that strengthen explainability, privacy, and governance across modern AI and agentic systems. The two initial open-source projects will be: 

  • Agent Explainability Tools that will help teams trace and understand decision-making across multi-step agent workflows, making agent decisions transparent for data scientists, compliance teams, and end users.
  • Privacy-Preserving Proxies that will enable safer use of closed-source models by protecting sensitive data end-to-end, and that teams will be able to run locally.

Both projects will be designed to support responsible enterprise AI, with a focus on reliability, security, transparency, and explainability.

The 575 Lab is now available to the community of AI specialists, data scientists, and developers responsible for creating, deploying, and scaling AI agents and applications.

https://www.dataiku.com/press-releases/dataiku-launches-575-lab/

Graphwise announces the immediate availability of GraphRAG

Graphwise announced the availability of Graphwise GraphRAG, a low-code AI-workflow engine designed to turn “Python prototypes” into production-grade systems. It is based on a trusted semantic layer that reduces hallucinations and delivers precise, verifiable answers, uniting LLMs, enterprise data, structured knowledge, and multiple search methods. Unlike standard RAG, which “flattens” data into chunks and loses relationships, GraphRAG treats the knowledge graph as a trusted semantic backbone, ensuring AI responses are grounded in verifiable enterprise facts and complex relationships. Graphwise bridges the gap between complex enterprise data and functional AI agents. Features include:

  • Low-Code Visual Engine democratizes AI, enabling subject matter experts to adjust AI logic visually.
  • Out-of-the-Box Templates provide guardrails and support query expansion that deliver the fastest time-to-value.
  • Semantic Metadata Control Plane eliminates hallucinations and improves AI accuracy. AI responses are grounded in an organization’s “enterprise truth,” reducing risk.
  • Explainability and Provenance Panels support regulatory compliance. Built-in traceability affords transparency into how an AI response was produced.
  • Visual Debugging and Monitoring reduce maintenance costs by eliminating black box code.
  • SKOS-style Concept Enrichment harnesses domain-specific intelligence. This means AI understands company specific jargon, acronyms, and synonyms out-of-the-box.

https://graphwise.ai/news/new-graphrag-solution-moves-beyond-vector-only-rag-knowledge-graphs-provide-context-and-common-sense-to-ai

300-node clusters now supported in CockroachDB

From the CockroachDB Blog…

As AI-driven and agentic applications push data platforms into new territory, data architects are increasingly forced to choose between correctness, simplicity, and scale. To remove that tradeoff, we’re announcing support for 300-node clusters with 2.2M tpmC and 1.2PB of data in CockroachDB v25.4.4 and beyond. Also, on CockroachDB Cloud, we’re announcing support for 64 vCPU per node. All customers will be able to self-serve and select these larger instance types if desired.

Highlights include:

  • ~610K QPS, which, compared with PUA’s 17K QPS on a 9-node cluster, shows that CockroachDB scales near-linearly with cluster size.
  • Compared with a run on 25.2 with the same amount of imported data, the 25.4 run took 30% less storage space, thanks to enhanced compression.
  • Imports on 25.4 were 2× faster than on 25.1, speeding migrations to CockroachDB.
  • ADD COLUMN across 120B rows completed without regression.
  • A 330TB backup and 6 concurrent changefeeds completed in 2 hours and 40 minutes with no impact on foreground traffic.

Start with $400 in free credits. Or get a free 30-day trial of CockroachDB Enterprise on self-hosted environments.

https://www.cockroachlabs.com/blog/300-node-clusters-supported-cockroachdb


© 2026 The Gilbane Advisor
