Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Introduction to Intrepids

This is a guest post from friend and colleague Girish Altekar, who has been working on this idea and technology for some time. I have been involved as an advisor for a couple of years, and will be republishing the series of posts he refers to below. Check it out…


Be Intrepid on the web

Over the next few days, we will take the wraps off a new way to manage personal data. We will showcase applications that empower individuals to manage and reuse their data efficiently while enhancing privacy, increasing control, and enabling genuine competitiveness in the marketplaces they care about. Intrepids are a liberating technology for data owners: untethered from call centers, complex web sites, and repeated data entry, data owners are free to go about their lives, secure in their personal data and knowing that their Intrepid-driven requests are being honored accurately.

Businesses benefit as well. The authentication and accuracy built into Intrepid-driven data transfer let business applications rely on dependable user data instead of screen scraping and heuristics. Dependable data reduces the need for verification and cleansing, and drives the creation of innovative new applications that reduce business costs and improve customer experience.

A key difference between Intrepids and other approaches is that user data encapsulated in Intrepids stays with users, not with intrep-id.com or related servers. Intrepid servers facilitate the data transfer, but all user data is purged when the requested user transaction is completed.

These posts are intended to generate two kinds of interest.

  1. We want to gauge user interest in a privacy mechanism such as Intrepids. Please feel free to try out any of the applications and tell us whether Intrepids, if and when widely adopted, would be useful to you.
  2. We are also looking for strategic business partners who may be interested in exploring the use of Intrepids for their businesses and customers. Remember that any repetitive or burdensome data transfer can be eliminated by using Intrepids.

We would be honored if you felt like passing these posts on to colleagues and friends who may have an interest. 

In the next post we start with the first Intrepid example.

Future Topics

Previous Intrepid-related posts are available at intrep-id.com.

  • Introduction (this post)
  • Travel Intrepid 
  • Resume Intrepid
  • Other Intrepids 
  • Personal Health Profile Intrepid
  • Data Preferences Intrepid
  • Support / Receipt Intrepid
  • An Invoicing / Payment application
  • State Government Applications 
  • Summing It Up and Current Status

https://intrep-id.com

Kentico to focus on CMS & DXP

Kontent by Kentico announced that it has raised a $40 million investment from Expedition Growth Capital and become a standalone company. Kontent.ai, which started in 2015 as an internal startup within Kentico, will now operate as a separate company focused on large enterprise organizations. This allows Kentico Xperience to return to its roots and refresh its name to Kentico. Petr Palas, the founder of Kentico, is now Chairman of the Board for both companies, and the newly formed board has appointed Dominik Pinter as Chief Executive Officer of Kentico.

Kentico started with a content management system (CMS) in 2004 and has since created two products: a digital experience platform (DXP) with content management, digital marketing, and commerce capabilities, and a headless CMS. In May 2020, the company split into two divisions: Kentico Xperience (DXP) and Kontent by Kentico (headless CMS).

The investment in the Kontent by Kentico division will be redirected straight into the DXP. With heavy investment in product development, the plan is to hire at least 60 more people over the next 12 months to join the 160+ person global team.

https://www.kentico.com

Netlify announces investments for the Jamstack Innovation Fund

Netlify, a platform for modern web development, announced the first cohort of the Jamstack Innovation Fund, created by Netlify to support the early-stage companies that are driving forward the modern web by arming developer teams with Jamstack-based tooling and practices.

Jamstack is an architectural approach that decouples the web experience from data and business logic, improving flexibility, scalability, performance and maintainability. Each of the startups Netlify has invested in offers a unique technology that adds to the best development experience for the web. They include ChiselStrike, a prototype-to-production data platform; Clerk, the first authentication service purpose-built for Jamstack; Clutch, a visual editor for Jamstack solutions; Convex, a global state management platform; Deno, a modern runtime for JavaScript and TypeScript; Everfund, a developer-first nonprofit tool to build custom fundraising systems; NuxtLabs, making web development intuitive with NuxtJS, an open source framework for Vue.js; Snaplet, a tool for copying Postgres databases; TakeShape, a GraphQL API mesh; and Tigris Data, a zero-ops backend for web and mobile apps.
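The decoupling described above happens at build time: a generator pulls data from an API or headless CMS and emits static pages, so the deployed site serves prebuilt HTML while dynamic data stays behind the API. A minimal, hypothetical sketch of that build step (the product data and render function are illustrative, not drawn from any of the startups named):

```python
def render_page(product: dict) -> str:
    """Render one product record into a static HTML page at build time."""
    return (
        "<html><body>"
        f"<h1>{product['name']}</h1>"
        f"<p>Price: ${product['price']:.2f}</p>"
        "</body></html>"
    )

def build_site(products: list[dict]) -> dict[str, str]:
    """Map URL paths to prerendered HTML, ready to deploy to a CDN."""
    return {f"/products/{p['id']}.html": render_page(p) for p in products}

# In a real Jamstack build the data would come from a headless CMS or
# database API; it is inlined here for illustration.
catalog = [{"id": "42", "name": "Widget", "price": 9.5}]
site = build_site(catalog)
print(site["/products/42.html"])
```

Because the pages are generated ahead of time, the web tier is reduced to serving static files, which is where the flexibility, scalability, and performance gains come from.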

The Fund has a goal of investing $10 million in the Jamstack ecosystem. In addition to a $100,000 investment, Netlify provides a free startup program. Netlify is accepting rolling submissions to the Jamstack Innovation Fund.

https://www.netlify.com/jamstack-fund/

IBM Research open-sources toolkit for Deep Search

IBM has open-sourced part of the IBM Deep Search Experience as a new toolkit, Deep Search for Scientific Discovery (DS4SD), aimed at scientific research and businesses, with the goal of accelerating the rate of scientific discovery.

To help achieve this goal, we’re now publicly releasing a key component of the Deep Search Experience, our automatic document conversion service. It allows users to upload documents in an interactive fashion to inspect a document’s conversion quality. DS4SD has a simple drag-and-drop interface, making it very easy for non-experts to use. We’re also releasing deepsearch-toolkit, a Python package, where users can programmatically upload and convert documents in bulk.

Deep Search uses AI to collect, convert, curate, and ultimately search huge document collections for information that is too specific for common search tools to handle. It collects data from public, private, structured, and unstructured sources and leverages state-of-the-art AI methods to convert PDF documents into easily decipherable JSON format with a uniform schema that is ideal for today’s data scientists. It then applies dedicated natural language processing and computer vision machine-learning algorithms on these documents and ultimately creates searchable knowledge graphs.
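A uniform JSON schema is what makes the converted documents easy to post-process. The sketch below walks a converted document and pulls out its paragraph text; note that the field names used here (`main-text`, `type`, `text`) are assumptions for illustration, not the toolkit's documented schema:

```python
def extract_paragraphs(doc: dict) -> list[str]:
    """Collect paragraph text from a converted-document JSON.

    The schema here (a 'main-text' list of {'type', 'text'} items) is a
    hypothetical stand-in for the actual Deep Search output format.
    """
    return [
        item["text"]
        for item in doc.get("main-text", [])
        if item.get("type") == "paragraph" and "text" in item
    ]

# A toy converted document in the assumed schema:
converted = {
    "main-text": [
        {"type": "title", "text": "A Study of Catalysts"},
        {"type": "paragraph", "text": "We analyze reaction rates."},
        {"type": "paragraph", "text": "Results are summarized below."},
    ]
}
print(extract_paragraphs(converted))
```

Downstream steps such as NLP tagging or knowledge-graph construction would consume structured records like these rather than raw PDFs.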

https://research.ibm.com/blog/deep-search-toolkit

Ontotext releases GraphDB 10

GraphDB 10.0 is the first major release since GraphDB 9.0 was released in September 2019. It implements a next-generation, simpler, and more reliable cluster architecture to deliver better resilience with reduced infrastructure costs. GraphDB 10 lowers the complexity of operations with better automation interfaces and a self-organized cluster for automated recovery. Deployment and packaging optimizations allow for effortless upgrades across the different editions of the engine, all the way from GraphDB Free to the Enterprise Edition. The improved full-text search (FTS) connectors of GraphDB 10 enable more comprehensive filtering as well as easier downstream data replication. Finally, parallelization of the path search algorithms brings massive improvement in graph analytics workloads through better exploitation of multi-core hardware.

Unlike previous versions, GraphDB 10 is packaged as a single distribution that can run in Free, Standard or Enterprise Edition modes depending on the currently set license. It requires zero development effort to pass from one edition to another. It is also possible to export a repository with an expired license so users are never locked out of their own data. Two major areas of improvement coming in 10.1 will be query performance optimization and availability on some of the major cloud platforms.

https://www.ontotext.com

DataStax’s Astra Streaming now supports Kafka and RabbitMQ

DataStax announced the general availability of Astra Streaming, a managed messaging and event streaming service built on Apache Pulsar. Now featuring built-in API-level support for Kafka, RabbitMQ and Java Message Service (JMS), Astra Streaming makes it easier for enterprises to get real-time value from their data-in-motion. Capabilities include:

  • Mobilizes all data-in-motion: An enterprise’s data-in-motion encompasses all data in platforms that provide streaming, queuing, and pub/sub capabilities; Astra Streaming can address these use cases at the scale enterprises need.
  • Modernizes event-driven architectures: Seamlessly leverage existing messaging/pub-sub apps and turn them into streaming apps with a drop-in replacement; easily modernize Kafka applications with zero rewrites.
  • Runs across an entire IT estate (multi-cloud and on-premises): Supports a unified event fabric that stretches across an enterprise’s data-in-motion spread across their entire data estate: on premises, in the cloud, and at the edge.
  • Powers a real-time data ecosystem: Through a wide range of connectors, Astra Streaming is connected to an enterprise’s data ecosystem, enabling real-time data to flow instantly from data sources and applications to streaming analytics and machine learning systems. It’s also integrated with Astra DB, powering its CDC capabilities.
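Because the service exposes a Kafka-compatible API, existing Kafka producer code can in principle be pointed at it unchanged. A hedged sketch of the producer side (the broker address, topic, and event shape below are assumptions for illustration, not documented Astra Streaming values):

```python
import json

def encode_event(event: dict) -> bytes:
    """Serialize an event to compact UTF-8 JSON bytes, the form a Kafka
    producer typically sends as a message value."""
    return json.dumps(event, separators=(",", ":")).encode("utf-8")

order = {"order_id": 1001, "status": "shipped"}
payload = encode_event(order)

# Hypothetical wiring against a Kafka-compatible endpoint such as the one
# Astra Streaming exposes (broker address and topic are illustrative):
#
#   from kafka import KafkaProducer  # kafka-python client
#   producer = KafkaProducer(bootstrap_servers="kafka.example-astra:9093")
#   producer.send("orders", value=payload)
#   producer.flush()

print(payload)
```

The point of the API-level compatibility claim is that only the connection configuration changes; the serialization and send logic stay as they were.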

https://www.datastax.com/press-release/datastax-s-astra-streaming-goes-ga-with-new-built-in-support-for-kafka-and-rabbitmq

Acquia adds data subject deletion requests to Acquia CDP

Acquia announced new regulatory compliance features that help organizations using Acquia Customer Data Platform (CDP) comply with data subject requests and privacy laws in general. Using a new self-service interface, organizations can rapidly process “Right to Erasure” (also known as “Right to be Forgotten”) requests from their customers, associated with regulations such as GDPR and CCPA. The feature is designed to make it simple for legal and compliance workflows in organizations using Acquia CDP to process deletion requests from their own customers, ensuring that these requests are handled quickly.

Other recent self-service updates include secure credentials management for Acquia CDP out-of-the-box connectors. Organizations can now generate and manage their own credentials for pre-built connectors to external services such as Facebook or Google. In addition, they can set up new credentials for their own custom connectors. Both self-service credentials management and compliance features are meant to accelerate workflows within Acquia CDP, without having to wait for assistance from an Acquia customer support team member.

https://www.acquia.com

Tellius and Databricks partner to democratize data analysis

Tellius announced a partnership with Databricks to give joint customers the ability to run Tellius natural language search queries and automated insights directly on the Databricks Lakehouse Platform, powered by Delta Lake, without the need to move any data.

With Tellius, organizations can search and analyze their data to identify what is happening with natural language queries, understand why metrics are changing via AI-powered Insights, and determine next best actions with deep insights and AutoML. Connecting to Delta Lake on Databricks takes only a few clicks, and then users can perform a natural language search of their unaggregated structured and unstructured data to answer their own questions. They can drill down to get granular insights, leverage single-click AI analysis to uncover trends, key drivers, and anomalies in their data, and create predictive models via AutoML in Tellius. Answers and insights can be written back to source applications to operationalize them. Faster data collaboration helps democratize data access across analytics teams with less worry about performance or IT maintenance.

https://www.tellius.com/tellius-and-databricks-partner-to-deliver-ai-powered-decision-intelligence-for-the-data-lakehouse/


© 2024 The Gilbane Advisor
