Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Confluent launches Q3 ʼ21 Release

Confluent, Inc. announced the Confluent Q3 ʼ21 Release, with features that help organizations share data between different environments, integrate with business-critical applications, and store data for digital customer experiences and data-driven backend operations.

As part of the Q3 ʼ21 Release, ksqlDB pull queries are now generally available in Confluent Cloud. Pull queries fetch the current state of a materialized view on demand, while push queries stream results continuously as data changes; together they let ksqlDB support a broad class of end-to-end stream processing workloads without having to work across multiple systems to build streaming applications.
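To make the pull/push distinction concrete, here is a minimal sketch of the request bodies a client would send to ksqlDB's documented HTTP `/query-stream` endpoint. The host, stream, and table names are placeholders, not anything from the announcement.

```python
# Minimal sketch of issuing ksqlDB pull and push queries over its HTTP API.
# The /query-stream endpoint and {"sql": ..., "properties": ...} body shape
# follow ksqlDB's REST interface; stream/table names here are hypothetical.

import json

KSQLDB_URL = "https://<your-ksqldb-host>/query-stream"  # placeholder host

def build_query_body(sql, props=None):
    """Serialize a ksqlDB query request body."""
    return json.dumps({"sql": sql, "properties": props or {}})

# Pull query: fetch the current state of a materialized table, then return.
pull_body = build_query_body(
    "SELECT total FROM orders_by_user WHERE user_id = 'u42';"
)

# Push query: subscribe to changes as they occur. EMIT CHANGES keeps the
# connection open and streams matching rows indefinitely.
push_body = build_query_body(
    "SELECT * FROM orders EMIT CHANGES;",
    {"auto.offset.reset": "earliest"},
)
```

The only difference on the wire is the SQL text: `EMIT CHANGES` turns a one-shot pull into a continuous push subscription.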

Confluent has expanded its library of managed Kafka connectors. The Salesforce Platform Events Source connector enables organizations to unlock customer data and share it with downstream data warehouses and applications. The Azure Cosmos DB Sink connector helps companies migrate to a modern, cloud-native database to enable high performance and automatic scaling, unlocking real-time use cases.

Confluent re-architected Kafka to work in the cloud. Infinite Storage is now generally available for Google Cloud, after initially launching for AWS. Organizations can now retain all the real-time and historical data they need without pre-provisioning additional infrastructure or running the risk of paying for unused storage.

https://www.confluent.io/

Moonwalk Universal updates support for IBM Spectrum Discover

Moonwalk Universal, a specialist in large-scale data management solutions, announced that Moonwalk version 12.12 provides enhanced metadata and content inspection capabilities to streamline AI workflows and integrate heterogeneous data environments for IBM Spectrum Discover.

IBM Spectrum Discover is an advanced data cataloging and metadata management system that provides content insight for exabyte-scale unstructured data. IBM Spectrum Discover and Moonwalk connect to multiple file and object storage systems on-premises and in the cloud. The solution has been designed to rapidly ingest, consolidate and index metadata for billions of files and objects, providing a unified metadata layer on top of heterogeneous storage environments. A unified metadata layer with custom and automated tagging enables data scientists, storage administrators, and data stewards to manage, classify and gain insights from massive amounts of unstructured data. The insights gained accelerate large-scale analytics, improve storage economics, and help with risk mitigation to create competitive advantage and speed research.
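The idea of a unified, tag-driven metadata layer over heterogeneous storage can be sketched in a few lines. This is an illustrative toy, not Moonwalk or Spectrum Discover code; the paths, source names, and tags are invented.

```python
# Toy sketch of a unified metadata layer: index per-file metadata from
# heterogeneous storage systems and query it by custom tags, regardless
# of which system each file lives on.

from collections import defaultdict

class MetadataCatalog:
    def __init__(self):
        self._records = {}                  # path -> metadata dict
        self._tag_index = defaultdict(set)  # tag -> set of paths

    def ingest(self, path, source, size, tags=()):
        """Record one file/object's metadata, whatever system holds it."""
        self._records[path] = {"source": source, "size": size, "tags": set(tags)}
        for tag in tags:
            self._tag_index[tag].add(path)

    def find_by_tag(self, tag):
        """Return all indexed paths carrying a given tag, across sources."""
        return sorted(self._tag_index.get(tag, ()))

catalog = MetadataCatalog()
catalog.ingest("/nfs/scans/img001.tif", "NetApp", 4_200_000, tags=["pii", "raw"])
catalog.ingest("s3://bucket/img002.tif", "Amazon S3", 3_900_000, tags=["raw"])
```

A single tag lookup now spans an NFS filer and an S3 bucket, which is the kind of cross-silo classification the product description refers to.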

Moonwalk’s latest update also includes integration and support for the latest file systems and servers, cloud and object stores, including Windows Server 2019, NetApp and Isilon, IBM Cloud Object Store, RStor, Wasabi, Amazon S3, Azure, Google Cloud, Hitachi HCP, Dell EMC ECS, Scality RING, Caringo Swarm and Cloudian HyperStore.

https://ibm.moonwalkinc.com/spectrum-discover

OpenAI improves OpenAI Codex

OpenAI announced an improved version of OpenAI Codex, an AI system that translates natural language to code, and is releasing it through its API in private beta. Codex is the model that powers GitHub Copilot, which OpenAI built and launched in partnership with GitHub a month ago. Proficient in more than a dozen programming languages, Codex can now interpret simple commands in natural language and execute them on the user's behalf, making it possible to build a natural language interface to existing applications. OpenAI is now inviting businesses and developers to build on top of OpenAI Codex through its API.

OpenAI Codex is a descendant of GPT-3; its training data contains both natural language and billions of lines of source code from publicly available sources, including code in public GitHub repositories. OpenAI Codex is most capable in Python, but it is also proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift and TypeScript, and even Shell.
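As a rough illustration of how a client might frame a Codex request, the sketch below builds the parameter set for a completion call. The `davinci-codex` engine name and completion-style parameters reflect the API as announced at the time, but beta details may differ; the prompt format and helper are assumptions, not OpenAI's documented method.

```python
# Hedged sketch of preparing a natural-language-to-code request for the
# (then private-beta) OpenAI Codex API. Parameter values are illustrative.

def codex_request(task, language="python", max_tokens=128):
    """Build a parameter dict for a completion-style Codex call."""
    return {
        "engine": "davinci-codex",
        # A leading comment naming the target language nudges the model
        # toward that language (an assumed prompt convention, not an API rule).
        "prompt": f"# Language: {language}\n# Task: {task}\n",
        "max_tokens": max_tokens,
        "temperature": 0,   # deterministic output suits code generation
        "stop": ["\n\n"],   # stop at the first blank line
    }

params = codex_request("reverse a string")
# With the beta Python SDK this would then be sent along the lines of:
#   import openai
#   completion = openai.Completion.create(**params)
```

Because Codex emits free-form text, the stop sequence and low temperature are typical choices for keeping the returned code short and reproducible.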

GPT-3’s main skill is generating natural language in response to a natural language prompt, meaning the only way it affects the world is through the mind of the reader. OpenAI Codex has much of the natural language understanding of GPT-3, but it produces working code, meaning you can issue commands in English to any piece of software with an API. OpenAI is making Codex available in private beta via its API and aims to scale up as quickly as it safely can. During the initial period, OpenAI Codex will be offered for free.

https://openai.com/blog/openai-codex/

MarkLogic announces Solution Accelerators for Data Hub for Medicaid

MarkLogic Corporation announced the first two accelerators available from the Medicaid Accelerator Program launched earlier this year. The MarkLogic Solution Accelerator for FHIR is for existing customers looking to comply with the CMS Interoperability Rule directly out of MarkLogic. The accelerator is an open source framework for enabling FHIR interoperability in the MarkLogic Data Hub for Medicaid. It connects with HAPI FHIR so that FHIR-compliant queries can be executed directly against MarkLogic.

The FHIR accelerator’s open source framework exemplifies the ease with which FHIR queries can be issued to the MarkLogic Data Hub for Medicaid by combining querying with full text search in one consolidated platform. The FHIR accelerator is extensible and can be readily adapted to other FHIR solutions such as AWS FHIRWorks. The MarkLogic Starter Kit Solution Accelerator is for new customers starting their Medicaid modernization efforts using the MarkLogic Data Hub for Medicaid. The accelerator is a Medicaid integration platform using FHIR-friendly persistent data models and contains a sample project with data models, test data, data mappings, MPI configuration, deployment scripts, and unit tests. It provides Claim, Provider, and Member data models as a starting point for a customer’s Medicaid modernization efforts.
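For readers unfamiliar with FHIR, its search interface is a plain RESTful GET of the form `[base]/<ResourceType>?<params>`, independent of whether MarkLogic, HAPI FHIR, or another server answers it. The sketch below composes such a URL; the base endpoint is a placeholder.

```python
# Sketch of the shape of a FHIR search request. FHIR defines searches as
# GET [base]/<ResourceType>?<params>; the base URL below is hypothetical.

from urllib.parse import urlencode

FHIR_BASE = "https://example.org/fhir"  # placeholder endpoint

def fhir_search_url(resource_type, **params):
    """Compose a FHIR search URL, e.g. for Patient or Claim resources."""
    return f"{FHIR_BASE}/{resource_type}?{urlencode(params)}"

# Find claims for a given patient, newest first, using standard FHIR
# search parameters (_sort and _count are defined by the specification).
url = fhir_search_url("Claim", patient="Patient/123", _sort="-created", _count=20)
```

A FHIR facade such as the accelerator's job is essentially to translate queries of this shape into the backing database's native query language.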

https://www.marklogic.com

Couchbase releases Couchbase Server 7

Couchbase, Inc. announced the availability of Couchbase Server 7. This release bridges the best aspects of relational databases, such as ACID transactions, with the flexibility of a modern database.

Customers can execute business transactions within their customer-facing applications, develop customer 360 data models and applications, and execute plans to modernize relational-based applications to the cloud. Development teams can more easily make the transition from relational databases to Couchbase’s modern database without needing to re-train team members as the platform supports the programming languages they already use. Highlights: 

  • Eliminates database sprawl by adding mature SQL transaction capabilities, so customers no longer need both a relational database and a NoSQL database. Couchbase now supports multi-statement SQL transactions, fusing transactional guarantees with high-volume interactions.
  • Enables runtime updates with zero downtime through a dynamic data containment model. Couchbase Server 7 introduces schema- and table-like organizing structures, called “scopes and collections,” within the schemaless database. Customers can add a table (the “collection”) while transactions are happening, without modifying the schema (the “scope”) or taking the database down for the upgrade.
  • Delivers faster operational performance and a lower total cost of ownership through collection-level data access, partitioning, and index isolation.
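To see what scopes and collections change in practice: a document in Couchbase 7 is addressed as bucket.scope.collection, mirroring the database.schema.table hierarchy of relational systems. The sketch below builds that fully qualified keyspace for a SQL++ (N1QL) statement; the bucket, scope, and collection names are hypothetical.

```python
# Sketch of addressing data under Couchbase 7's scopes and collections.
# A keyspace in SQL++ is the backtick-quoted path bucket.scope.collection.

def keyspace(bucket, scope, collection):
    """Fully qualified keyspace, as used in SQL++ (N1QL) statements."""
    return f"`{bucket}`.`{scope}`.`{collection}`"

ks = keyspace("ecommerce", "sales", "orders")
query = f"SELECT o.total FROM {ks} AS o WHERE o.status = 'paid'"

# With the Couchbase Python SDK the same path is traversed via objects, e.g.:
#   collection = cluster.bucket("ecommerce").scope("sales").collection("orders")
```

Because the collection is just a new node in this path, one can be added at runtime without restructuring the scope or restarting the database, which is the zero-downtime point the release highlights.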

https://www.couchbase.com

Datadobi unveils Mobility Engine for unstructured data

Datadobi’s version 5.12 brings together a range of components that create what officials called a “step over the threshold” from DobiMigrate (data migration) and DobiProtect (data protection) into a complete data mobility engine that can address the scale and challenges inherent in large storage environments. The goal of the vendor-neutral data mobility engine is to give enterprises the means to manage unstructured data, including images, videos, audio, emails, texts, social media content, spreadsheets, streaming data, and data from Internet of Things (IoT) devices.

Datadobi engineers reworked the file access layer, enabling the engine’s NFS and SMB file access layers to focus on data copying and file system integrity verification in both the data center and the cloud, which tend to house products from myriad vendors. They used low-level optimizations in the NFS and SMB stacks to drive more efficient data and metadata processing, pipelining the protocol communication and parallelizing file access workloads across multiple servers. The engine comes with what the company dubs its Integrity Enforcement Technology layer, a chain-of-custody technology that makes vendor-neutral data mobility more reliable and improves the preservation of data and metadata integrity.
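The two ideas described, parallelizing copies across workers and verifying integrity by comparing checksums, can be illustrated with a self-contained toy. This is not Datadobi's implementation; it just demonstrates the pattern on local temporary files.

```python
# Toy illustration of parallel file copying with integrity verification:
# copy files concurrently, then confirm each destination's checksum
# matches its source.

import hashlib
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def sha256(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src, dst_dir):
    """Copy one file and report whether the destination matches the source."""
    dst = Path(dst_dir) / Path(src).name
    shutil.copy2(src, dst)
    return sha256(src) == sha256(dst)

# Demo on temporary files: copy several files in parallel, verify each one.
src_dir = Path(tempfile.mkdtemp())
dst_dir = Path(tempfile.mkdtemp())
for i in range(4):
    (src_dir / f"file{i}.bin").write_bytes(b"payload-%d" % i)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda p: copy_and_verify(p, dst_dir),
                            sorted(src_dir.iterdir())))
```

A production engine additionally verifies metadata (ownership, timestamps, ACLs) and pipelines protocol round-trips, but the copy-then-compare checksum loop is the core of chain-of-custody verification.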

https://datadobi.com (via Enterprise Storage Forum)

The Apache Cassandra Project releases Apache Cassandra v4.0

The Apache Cassandra Project today released v4.0 of Apache Cassandra, the open source distributed big data database management platform. A NoSQL database, Apache Cassandra handles massive amounts of data across load-intensive applications with high availability and no single point of failure. Cassandra v4.0 handles unstructured data, with thousands of writes per second. New features:

  • Increased speed and scalability – streams data faster during scaling operations and improves throughput on reads and writes, delivering a more elastic architecture, particularly in cloud and Kubernetes deployments.
  • Improved consistency – optimized incremental repair keeps data replicas in sync faster and more efficiently.
  • Enhanced security and observability – audit logging tracks user access and activity with minimal impact on workload performance. New capture-and-replay capabilities enable analysis of production workloads to help ensure compliance with SOX, PCI, GDPR, and other regulatory and security requirements.
  • New configuration settings – exposed system metrics and configuration settings give operators easy access to the data needed to optimize deployments.
  • Minimized latency – garbage collector pause times are reduced to a few milliseconds with no latency degradation as heap sizes increase.
  • Better compression – improved compression efficiency eases unnecessary strain on disk space and improves read performance.
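Audit logging is switched on through cassandra.yaml. The fragment below is a hedged sketch following the option names in the 4.0 defaults file; the keyspace name and category selection are illustrative, not recommended settings.

```yaml
# Sketch of enabling Cassandra 4.0 audit logging in cassandra.yaml.
# Option names follow the shipped defaults; values here are illustrative.
audit_logging_options:
    enabled: true
    logger:
      - class_name: BinAuditLogger   # default binary logger
    # Narrow the audited surface to limit overhead (assumed example values):
    included_keyspaces: payments
    included_categories: AUTH,DML,DDL
```

Restricting keyspaces and categories is how the "minimal impact to workload performance" claim is realized in practice: only the statements an auditor cares about are written to the log.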

https://cassandra.apache.org/

Adobe launches Adobe Analytics to advance digital literacy

Adobe announced the Adobe Analytics curriculum for education, a global program that supports the future workforce with in-demand data science skills. As part of the next generation of the Adobe Education Exchange, college instructors and students will be able to use Adobe Analytics, a customer data analytics platform, for free and get access to course curriculum with hands-on activities. Students will learn how to use data to drive business decisions and gain skills for careers spanning data science to marketing and product management.

Participants get access to a sandbox environment, which allows students to use Adobe Analytics with rich demo data. The curriculum is self-paced, and instructors can pick and choose modules to incorporate. The four modules:

  • Data Collection: Students learn the fundamentals of data collection, warehousing and cleaning, as well as implementation.
  • Data Strategy and Architecture: Once data is collected, teams have to set up a data structure to make the data consumable across an organization.
  • Standard Metrics and Functionality: Focuses on reporting and how data is presented to functions such as marketing, product development, eCommerce and design.
  • Analysis Workspace Fundamentals: Provides students an opportunity to curate data, collaborate with others, produce new visualizations and uncover insights that advance business objectives.

https://experienceleague.adobe.com/landing/analytics-university/


© 2025 The Gilbane Advisor
