Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our computing coverage is largely limited to software, and our data focus is mostly on unstructured, semi-structured, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

DataStax launches Data API to simplify GenAI application development

DataStax, a company that powers generative AI (GenAI) applications with relevant, scalable data, announced the general availability of its Data API, a one-stop API for GenAI that provides the data and a complete stack for production GenAI and retrieval-augmented generation (RAG) applications with high relevancy and low latency. Also debuting today is a completely updated developer experience for DataStax Astra DB, a vector database for building production-level AI applications.

The new vector Data API and updated experience make Apache Cassandra available to JavaScript, Python, and full-stack application developers through a more intuitive AI development experience. It is designed for ease of use, using the JVector search engine to deliver higher relevancy, higher throughput, and fast response times. It introduces an intuitive dashboard, efficient data loading and exploration tools, and seamless integration with AI and machine learning (ML) frameworks.

Developers can use the Data API for an out-of-the-box AI ecosystem that simplifies integrations with GenAI ecosystem leaders such as LangChain, LlamaIndex, OpenAI, Vercel, Google Vertex AI, Amazon Bedrock, GitHub Copilot, Azure, and other major platforms, while supporting security and compliance standards. Any developer can now support advanced RAG techniques such as FLARE and ReAct that must synthesize multiple responses.
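
To make the RAG workflow the Data API targets concrete, here is a minimal sketch in Python using plain HTTP calls to a document-style vector endpoint. The endpoint URL, token header, command names, and the $vector sort field are assumptions for illustration rather than DataStax's documented schema; the official JavaScript and Python clients wrap these details.

    # A minimal RAG-style sketch against a document-style vector API using
    # plain HTTP. The endpoint, token header, command names ("insertMany",
    # "find"), and the "$vector" sort field are assumptions for illustration;
    # check the Astra DB Data API docs for the real schema.
    import requests

    ENDPOINT = "https://my-db.example.com/api/json/v1/default_keyspace/articles"  # placeholder
    HEADERS = {"Token": "my-application-token", "Content-Type": "application/json"}  # placeholder

    def insert_documents(docs):
        """Store documents, each carrying a precomputed embedding vector."""
        return requests.post(ENDPOINT, headers=HEADERS,
                             json={"insertMany": {"documents": docs}}).json()

    def vector_search(query_vector, limit=5):
        """Return the stored documents closest to the query embedding."""
        body = {"find": {"sort": {"$vector": query_vector},
                         "options": {"limit": limit}}}
        return requests.post(ENDPOINT, headers=HEADERS, json=body).json()

    # Typical RAG flow: embed documents and the user question with the same
    # embedding model, call vector_search(), and pass the retrieved text to an
    # LLM as grounding context.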

https://www.datastax.com/blog/general-availability-data-api-for-enhanced-developer-experience

Ontotext’s GraphDB available on the Microsoft Azure Marketplace

Ontotext, a semantic knowledge graph provider, today announced that its flagship product, GraphDB, is now available on the Microsoft Azure Marketplace. Enterprises can now streamline the global deployment of graph databases and facilitate the migration of on-premises data to Azure and other prominent public cloud platforms. Customers can take advantage of the Azure cloud platform, with streamlined deployment and management that helps ensure compliance with rigorous industry and privacy regulations.

Ontotext GraphDB accelerates knowledge graph builds and provides users with a platform for enterprise-wide data integration and discovery. GraphDB was developed for companies that face decentralized data challenges and require data-driven analytics to drive insights for crucial business needs. GraphDB on Azure enables Ontotext and Microsoft's joint customers to:

  • Remove data silos and speed up time to insights/time to market with a linking engine for enterprise data management.
  • Unify data sources for impactful data sharing, collaboration and semantic data discovery that delivers ROI on information architecture spend.
  • Empower standardized data exchange, discovery, integration, and reuse to provide 360 views of their business.
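
As a concrete example of the kind of data discovery GraphDB supports, the sketch below queries a repository's SPARQL endpoint from Python. The host, port, repository name, and query are placeholders that assume GraphDB's default REST layout rather than any Azure-specific configuration.

    # Hedged sketch: run a SPARQL query against a GraphDB repository endpoint.
    # Host, port, and repository name are placeholders; GraphDB's default
    # layout exposes repositories at /repositories/<id>.
    import requests

    SPARQL_ENDPOINT = "http://my-graphdb-host:7200/repositories/my_repo"  # placeholder

    QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?subject ?label
    WHERE { ?subject rdfs:label ?label }
    LIMIT 10
    """

    response = requests.get(
        SPARQL_ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
    )
    for row in response.json()["results"]["bindings"]:
        print(row["subject"]["value"], row["label"]["value"])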

https://www.ontotext.com/company/news/ontotexts-graphdb-solution-is-now-available-on-the-microsoft-azure-marketplace/

Pinecone announces Pinecone Serverless

Vector database company Pinecone announced Pinecone Serverless, with a unique architecture and a serverless experience, designed to deliver cost reductions and eliminate infrastructure hassles so companies can bring better GenAI applications to market faster. Companies can improve the quality of their GenAI applications, whichever LLM they choose, simply by making more data (or “knowledge”) available to the LLM. Pinecone Serverless includes:

  • Separation of reads, writes, and storage reduces costs for all types and sizes of workloads.
  • Architecture with vector clustering on top of blob storage provides low-latency, fresh vector search over practically unlimited data sizes at a low cost.
  • Indexing and retrieval algorithms built from scratch to enable fast and memory-efficient vector search from blob storage without sacrificing retrieval quality.
  • Multi-tenant compute layer provides efficient retrieval for thousands of users, on demand. This enables a serverless experience in which developers don’t need to provision, manage, or think about infrastructure, as well as usage-based billing that lets companies pay only for what they use.

Pinecone Serverless is launching with integrations to Anthropic, Anyscale, Cohere, Confluent, LangChain, Pulumi, and Vercel. It is available in public preview today in AWS cloud regions and will be available on Azure and GCP thereafter.
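
For developers, the serverless experience described above reduces to naming a cloud and region when creating an index, with no pod sizing. The Python sketch below follows the general shape of Pinecone's client at the time of the announcement; treat the class and parameter names as assumptions and confirm them against Pinecone's current documentation.

    # Hedged sketch of the serverless workflow: create an index by specifying a
    # cloud and region, upsert vectors, and query. Class and argument names are
    # assumptions based on Pinecone's v3 Python client; verify before use.
    from pinecone import Pinecone, ServerlessSpec

    pc = Pinecone(api_key="my-api-key")  # placeholder credential

    pc.create_index(
        name="genai-docs",
        dimension=1536,  # must match the embedding model you use
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-west-2"),
    )

    index = pc.Index("genai-docs")
    index.upsert(vectors=[("doc-1", [0.1] * 1536, {"source": "kb"})])
    matches = index.query(vector=[0.1] * 1536, top_k=3, include_metadata=True)
    print(matches)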

https://www.pinecone.io/blog/serverless/

Typeface announces integration within Microsoft Dynamics 365

Typeface, a generative AI platform for enterprise content creation, and Microsoft today announced an AI-powered experience within Microsoft Dynamics 365 Customer Insights, a customer data platform and journey orchestration solution, aimed at transforming how marketers work by reducing the complexities of end-to-end campaign management and enhancing marketer productivity and ROI. 

To use this AI-powered experience in Dynamics 365 Customer Insights, marketing teams can simply type their desired campaign outcome in their own words or upload an existing brief. Copilot then responds by generating a central project board that recommends and connects everything needed for the campaign, including audience data, journey orchestration, and channels – all in the flow of work. While creating their campaign, marketers will have access to Typeface, so they can generate and curate on-brand content directly within Dynamics 365 Customer Insights. An early-access public preview will be released to Dynamics 365 Customer Insights customers in the first quarter of 2024.

https://www.typeface.ai/blog/typeface-announces-integration-within-microsoft-dynamics-365-customer-insights-to-help-redefine-marketer-experiences

OpenAI introduces ChatGPT Team 

From the OpenAI blog…

We’re launching a new ChatGPT plan for teams of all sizes, which provides a secure, collaborative workspace to get the most out of ChatGPT at work…

ChatGPT Team offers access to our advanced models like GPT-4 and DALL·E 3, and tools like Advanced Data Analysis. It additionally includes a dedicated collaborative workspace for your team and admin tools for team management. As with ChatGPT Enterprise, you own and control your business data—we do not train on your business data or conversations, and our models don’t learn from your usage. More details on our data privacy practices can be found on our privacy page and Trust Portal. ChatGPT Team includes:

  • Access to GPT-4 with 32K context window
  • Tools like DALL·E 3, GPT-4 with Vision, Browsing, Advanced Data Analysis—with higher message caps
  • No training on your business data or conversations
  • Secure workspace for your team
  • Create and share custom GPTs with your workspace
  • Admin console for workspace and team management
  • Early access to new features and improvements

We recently announced GPTs—custom versions of ChatGPT that you can create for a specific purpose with instructions, expanded knowledge, and custom capabilities. These can be especially useful for businesses and teams. With GPTs, you can customize ChatGPT to your team’s specific needs and workflows (no code required) and publish them securely to your team’s workspace. GPTs can help with a wide range of tasks, such as assisting in project management, team onboarding, generating code, performing data analysis, securely taking action in your existing systems and tools, or creating collateral to match your brand tone and voice. Today, we announced the GPT Store where you can find useful and popular GPTs from your workspace.

ChatGPT Team costs $25/month per user when billed annually, or $30/month per user when billed monthly. You can explore the details or get started now by upgrading in your ChatGPT settings.

https://openai.com/blog/introducing-chatgpt-team

Axel Springer and OpenAI partner to deepen beneficial use of AI in journalism 

From Axel Springer…

Axel Springer and OpenAI have announced a global partnership to strengthen independent journalism in the age of artificial intelligence (AI). The initiative will enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics, and explicitly values the publisher’s role in contributing to OpenAI’s products. This marks a significant step in both companies’ commitment to leverage AI for enhancing content experiences and creating new financial opportunities that support a sustainable future for journalism.

With this partnership, ChatGPT users around the world will receive summaries of selected global news content from Axel Springer’s media brands including POLITICO, BUSINESS INSIDER, and European properties BILD and WELT, including otherwise paid content. ChatGPT’s answers to user queries will include attribution and links to the full articles for transparency and further information.

In addition, the partnership supports Axel Springer’s existing AI-driven ventures that build upon OpenAI’s technology. The collaboration also involves the use of quality content from Axel Springer media brands for advancing the training of OpenAI’s sophisticated large language models.

Mathias Döpfner, CEO of Axel Springer: “We are excited to have shaped this global partnership between Axel Springer and OpenAI – the first of its kind. We want to explore the opportunities of AI empowered journalism – to bring quality, societal relevance and the business model of journalism to the next level.”

Brad Lightcap, COO of OpenAI: “This partnership with Axel Springer will help provide people with new ways to access quality, real-time news content through our AI tools. We are deeply committed to working with publishers and creators around the world and ensuring they benefit from advanced AI technology and new revenue models.”

https://www.axelspringer.com/en/ax-press-release/axel-springer-and-openai-partner-to-deepen-beneficial-use-of-ai-in-journalism

Franz unveils AllegroGraph 8.0

Franz Inc. announced AllegroGraph 8.0, a Neuro-Symbolic AI Platform that incorporates Large Language Model (LLM) components directly into SPARQL, along with vector generation and vector storage, for a comprehensive AI Knowledge Graph solution. AllegroGraph 8.0 combines machine learning (statistical AI) with knowledge and reasoning (symbolic AI) capabilities to solve problems that require reasoning, learn efficiently with less data, expand applicability, and produce decisions understandable to humans. AllegroGraph 8.0 includes:

  • Retrieval Augmented Generation (RAG) for LLMs – AllegroGraph 8.0 guides Generative AI content through RAG, feeding LLMs with the ‘source of truth.’ This approach helps avoid ‘hallucinations’ by grounding the output in fact-based knowledge.
  • Natural Language Queries and Reasoning – The LLMagic functions serve as the bridge between human language and machine understanding, offering a dynamic natural language interface for querying and reasoning processes.
  • Enterprise Document Deep-insight – New VectorStore capabilities offer a bridge between enterprise documents and Knowledge Graphs, allowing users to access knowledge hidden within documents.
  • AI Symbolic Rule Generation – AllegroGraph offers built-in rule-based system capabilities tailored for symbolic reasoning, distilling complex data into actionable, interpretable rules.
  • Streamlined Ontology and Taxonomy Creation – LLMagic can streamline the complex and often labor-intensive task of crafting ontologies and taxonomies for any topic.
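
The RAG bullet above reflects a general neuro-symbolic pattern: retrieve fact-based statements from the knowledge graph, then pass them to an LLM as grounding context. The Python sketch below illustrates that pattern generically over a SPARQL endpoint and a chat-completion call; it does not use AllegroGraph's LLMagic or VectorStore functions, and the endpoint, model name, and prompt are assumptions.

    # Hedged, generic sketch of graph-grounded RAG (not the AllegroGraph
    # LLMagic API): pull facts about an entity via SPARQL, then ask an LLM to
    # answer using only those facts. Endpoint and model names are placeholders.
    import requests
    from openai import OpenAI

    SPARQL_ENDPOINT = "http://my-kg-host:10035/repositories/kb"  # placeholder

    def fetch_facts(entity_label):
        """Retrieve predicate/object pairs about an entity as grounding facts."""
        query = f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?p ?o WHERE {{ ?s rdfs:label "{entity_label}" ; ?p ?o . }} LIMIT 20
        """
        resp = requests.get(SPARQL_ENDPOINT, params={"query": query},
                            headers={"Accept": "application/sparql-results+json"})
        rows = resp.json()["results"]["bindings"]
        return "\n".join(f'{r["p"]["value"]} {r["o"]["value"]}' for r in rows)

    def grounded_answer(question, entity_label):
        """Ask the LLM to answer strictly from the retrieved graph facts."""
        facts = fetch_facts(entity_label)
        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
        chat = client.chat.completions.create(
            model="gpt-4",  # placeholder model choice
            messages=[
                {"role": "system", "content":
                 "Answer using only the provided facts; say 'unknown' otherwise."},
                {"role": "user", "content": f"Facts:\n{facts}\n\nQuestion: {question}"},
            ],
        )
        return chat.choices[0].message.content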

https://allegrograph.com/new-allegrograph-v8-neuro-symbolic-ai-platform/

Neo4j collaborating with AWS to enhance generative AI results

Neo4j announced a multi-year Strategic Collaboration Agreement (SCA) with Amazon Web Services (AWS) to enable enterprises to achieve better generative artificial intelligence (AI) outcomes through a combination of knowledge graphs and native vector search, which reduces generative AI hallucinations while making results more accurate, transparent, and explainable.

Neo4j also announced a new integration with Amazon Bedrock, a managed service that makes foundation models from AI companies accessible via an API to build and scale generative AI applications. Neo4j’s native integration with Amazon Bedrock enables:

  1. Reduced hallucinations: Neo4j, LangChain, and Amazon Bedrock can now work together using Retrieval Augmented Generation (RAG) to create virtual assistants grounded in enterprise knowledge.
  2. Personalized experiences: Neo4j’s context-rich knowledge graph integration with Amazon Bedrock can invoke an ecosystem of foundation models to generate personalized text and summaries for end users.
  3. Complete answers during real-time search: Developers can leverage Amazon Bedrock to generate vector embeddings from unstructured data (text, images, and video) and enrich knowledge graphs using Neo4j’s new vector search and store capability (a minimal sketch follows this list).
  4. Kickstart knowledge graph creation: Developers can leverage new generative AI capabilities in Amazon Bedrock to turn unstructured data into structured data and load it into a knowledge graph.
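
Points 3 and 4 above come down to a two-step pattern: generate an embedding with a Bedrock foundation model, then store or query it through a Neo4j vector index. Below is a hedged Python sketch of that pattern; the Titan embedding model ID, the vector index name, and the connection details are assumptions, and the db.index.vector.queryNodes procedure requires a Neo4j version with vector index support.

    # Hedged sketch: embed text with an Amazon Bedrock model, then find similar
    # nodes through a Neo4j vector index. Model ID, index name, and connection
    # details are assumptions for illustration.
    import json

    import boto3
    from neo4j import GraphDatabase

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def embed(text):
        """Generate an embedding vector with an Amazon Titan embedding model."""
        resp = bedrock.invoke_model(
            modelId="amazon.titan-embed-text-v1",  # assumed embedding model ID
            body=json.dumps({"inputText": text}),
            contentType="application/json",
            accept="application/json",
        )
        return json.loads(resp["body"].read())["embedding"]

    driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

    def similar_chunks(question, k=5):
        """Return the k stored text chunks most similar to the question."""
        records, _, _ = driver.execute_query(
            """
            CALL db.index.vector.queryNodes('chunk_embeddings', $k, $vec)
            YIELD node, score
            RETURN node.text AS text, score
            """,
            k=k, vec=embed(question),
        )
        return [(r["text"], r["score"]) for r in records]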

https://neo4j.com/press-releases/neo4j-aws-bedrock-integration/
