Curated for content, computing, data, information, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we mostly focus on unstructured, semi-structured, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Apollo GraphQL & MongoDB create stack for app developers

Apollo GraphQL and MongoDB, Inc., announced a technology partnership that helps app developers build richer experiences faster and reduce technical debt with a graph-native data layer. The partnership makes it easier for developers and teams to directly connect any supergraph powered by Apollo to a MongoDB Atlas database. Together, an Apollo supergraph and MongoDB Atlas create a composable and scalable GraphQL data layer. It provides developers with everything they need to efficiently use GraphQL:

  • A unified API, so app developers can rapidly create new experiences
  • A modular API layer, so each team can independently own their slice of the graph
  • A seamless, high-performance, flexible data layer that scales alongside API consumption

MongoDB’s flexible database paired with the GraphQL query language allows developers to work with the database in the language of their choice, with a standardized spec that has large community adoption. With the nested document model, developers can model and query data intuitively without the complexity of mapping GraphQL to relational data and defining relationships across tables. When used with MongoDB Atlas’s multi-region and multi-cloud capabilities, an Apollo supergraph gives developers a GraphQL layer to create end-user experiences for their apps and services.
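To illustrate the point about nested documents, here is a minimal Python sketch (the order schema and field names are invented for illustration, not part of either product): a nested document can answer a GraphQL-shaped query directly, with no joins across tables.

```python
# Illustrative only: a hypothetical "order" document in MongoDB's nested
# document model, and a resolver-style function showing how a GraphQL
# query shape can be answered without joining across tables.
order_doc = {
    "orderId": "A-1001",
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "items": [
        {"sku": "BK-42", "qty": 2, "price": 19.99},
        {"sku": "BK-77", "qty": 1, "price": 9.50},
    ],
}

def resolve_order(doc, selection):
    """Return only the fields a (hypothetical) GraphQL query asked for."""
    result = {}
    for field, sub in selection.items():
        value = doc.get(field)
        if sub and isinstance(value, dict):
            result[field] = resolve_order(value, sub)
        elif sub and isinstance(value, list):
            result[field] = [resolve_order(v, sub) for v in value]
        else:
            result[field] = value
    return result

# Shape of: query { order { orderId customer { name } items { sku qty } } }
selection = {"orderId": None, "customer": {"name": None}, "items": {"sku": None, "qty": None}}
print(resolve_order(order_doc, selection))
```

Because the document already nests customer and line-item data, the query shape maps onto it directly; in a relational layout the same query would require joins and relationship definitions.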

https://www.apollographql.com
https://www.mongodb.com

Botminds Document AI now available in Microsoft Azure Marketplace

Botminds AI announced the availability of the Botminds Document AI platform in the Microsoft Azure Marketplace, an online store providing applications and services for use on Azure. Botminds’ customers can now take advantage of the Azure cloud platform, with streamlined deployment and management.

Botminds is an integrated platform for Document Understanding/Intelligent Document Processing (IDP). Botminds provides AI-powered solutions that read and understand documents, a search engine to query any document, enriched document analytics, and end-to-end document-based process automation. Custom AI models help organizations transform data into intelligent, actionable insights in a matter of weeks.

The Azure Marketplace is an online market for buying and selling cloud solutions certified to run on Azure. The Azure Marketplace helps connect companies seeking cloud-based solutions with partners who have developed solutions that are ready to use.

https://www.botminds.ai

Data Harmony suite Recommender released

Access Innovations, Inc., provider of Data Harmony software solutions, announced the release of their new Recommender as part of the Data Harmony Suite. Recommender is now available to all Data Harmony clients using versions 3.16 or higher.

Recommender uses the semantic fingerprint of an article (its subject metadata tagging) to match it to other articles and content within the database. When the searcher finds an article they like, Recommender automatically displays other items with the same semantic fingerprint nearby on the search interface. This allows immediate display of highly relevant content alongside the search results, without scrolling or the frustration of hunting for similar items. It also allows for display of other relevant content such as conference papers, ads, books, meetings, expert profiles, and so forth.

This is not based on personalization profiles or purchasing history. By using metadata weighting and other algorithms, it provides only items relevant to the current query, resulting in faster search and the surfacing of more related information to the user.

For those interested in using Recommender there are two prerequisites: 1) the content needs to be indexed or tagged using a controlled vocabulary like a thesaurus or taxonomy, and 2) the search interface needs to be able to accommodate the API call to the tagged data and subsequent display of the results.
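To make the matching idea concrete, here is a hypothetical sketch (tag names, weights, and the similarity measure are invented for illustration; Access Innovations has not published its algorithm): articles tagged with weighted controlled-vocabulary terms can be ranked by weighted tag overlap.

```python
# Hypothetical sketch of "semantic fingerprint" matching: articles are tagged
# with weighted terms from a controlled vocabulary, and candidates are ranked
# by weighted overlap with the article the searcher is viewing.
def fingerprint_similarity(tags_a, tags_b):
    """Weighted overlap of two tag sets (dicts of term -> weight)."""
    shared = set(tags_a) & set(tags_b)
    overlap = sum(min(tags_a[t], tags_b[t]) for t in shared)
    total = sum(tags_a.values()) + sum(tags_b.values()) - overlap
    return overlap / total if total else 0.0

article = {"machine translation": 0.9, "neural networks": 0.6}
candidates = {
    "Paper A": {"machine translation": 0.8, "corpora": 0.4},
    "Paper B": {"blockchain": 0.7, "smart contracts": 0.5},
}

# Rank candidates by similarity to the current article; note that no
# personalization or purchase history is involved, only the tagging.
ranked = sorted(candidates,
                key=lambda k: fingerprint_similarity(article, candidates[k]),
                reverse=True)
print(ranked)  # Paper A shares a weighted tag; Paper B shares none.
```

This also shows why the two prerequisites matter: without controlled-vocabulary tagging there is no fingerprint to compare, and without an API hook the interface has nothing to display.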

https://www.accessinn.com

Travel Intrepid

This is the second post from Girish Altekar on Intrepid technology and applications*. The introductory post is here, and all our republished Intrepid posts are here.


Let’s look at the Travel Intrepid. 

When attempting to book a travel plan that is the cheapest, has the most convenient schedule, or is the most flexible, we typically use multiple travel sites and go through multiple iterations with dates, connecting cities, etc. to arrive at an optimal booking. Each iteration requires reentering travel dates, preferences, and other pieces of data. Even after an acceptable itinerary is created, travel info, passport info, Known Traveler Numbers, etc. for all the travelers in your family need to be reentered for every trip you plan.

Try the Travel Profile Intrepid (at https://lnkd.in/gPVvfci2), a structured version of your family’s travel profile. This profile allows you to capture your family’s relatively static travel data and preferences, and you can create a new one for a new trip by changing only the trip data (dates, destinations etc.). Once a trip/travel profile is created, you can use it to search for itineraries many times at Intrepid compatible sites by simply tweaking the new Travel Intrepid. When an itinerary is created, all the data needed to book the itinerary is already with the travel provider.  
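The split between "relatively static" family data and per-trip data can be sketched in a few lines of Python (the field names here are invented; the actual Intrepid format is defined at intrep-id.com):

```python
# Illustrative sketch: static family travel data is captured once, and a new
# trip reuses it, changing only the trip-specific fields.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TravelProfile:
    travelers: tuple       # names, passport info, Known Traveler Numbers, etc.
    seat_preference: str   # relatively static preferences
    origin: str            # per-trip data
    destination: str
    depart: str

base = TravelProfile(
    travelers=("Ada", "Grace"),
    seat_preference="aisle",
    origin="SFO",
    destination="LHR",
    depart="2022-09-01",
)

# A new trip keeps everything static and changes only the trip fields,
# mirroring "create a new one ... by changing only the trip data".
next_trip = replace(base, destination="NRT", depart="2022-11-15")
print(next_trip.travelers, next_trip.destination)
```

Because the static portion travels with the profile, an Intrepid-compatible site already has everything needed to book once an itinerary is chosen.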

Create your own Travel Profile Intrepid at https://lnkd.in/gPVvfci2 and upload it back to see its contents. For testing, you can create it with as much or as little data as you like, real or fake. Only your correct email is required to deliver your Travel Profile Intrepid to you. All data entered at intrep-id.com is purged when the Intrepid is transmitted – none of it is stored. Try it out.

We are interested in your feedback. https://lnkd.in/gm3fAPh

Previous Intrepid related posts available at intrep-id.com.

Introduction – https://lnkd.in/g7VyFyya

Future Topics

  • Travel Intrepid – this
  • Resume Intrepid
  • Other Intrepids
  • Personal Health Profile Intrepid
  • Data Preferences Intrepid
  • Support / Receipt Intrepid
  • An Invoicing / Payment application
  • State Government Applications
  • Summing It Up and Current Status

https://intrep-id.com

*Disclosure: I am an advisor to the Intrepid team.

Introduction to Intrepids

This is a guest post from friend and colleague Girish Altekar, who has been working on this idea and technology for some time. I have been involved as an advisor for a couple of years, and will be republishing the series of posts he refers to below. Check it out…


Be Intrepid on the web

Over the next few days, we take the wraps off a new way to manage personal data. We will showcase applications that empower individuals to manage and reuse their data efficiently while enhancing data privacy, increasing data control, and facilitating genuine competitiveness in marketplaces they are interested in. Our Intrepids technology liberates data owners: untethered from call centers, complex web sites, and repeated data entry, they are free to go about their lives feeling secure about their personal data and knowing that their Intrepid-driven requests are being honored accurately.

Businesses benefit as well. The authentication and accuracy built into Intrepid-driven data transfer empowers business applications to rely on actual, dependable user data instead of screen scraping and heuristics. Dependable data reduces the need for data verification and cleansing, and drives the creation of innovative new applications that reduce business costs and improve customer experience.

A key difference between Intrepids and other approaches is that user data encapsulated in Intrepids stays with users, not with intrep-id.com or related servers. Intrepid servers facilitate the data transfer, but all user data is purged when the requested user transaction is completed.

These posts are intended to generate two kinds of interest.

  1. We want to see if there is user interest in a privacy mechanism such as Intrepids. Please feel free to try out any application and give us your feedback if you think Intrepids, if/when widely adopted, may be useful to you.
  2. We are also looking for strategic business partners who may be interested in exploring the use of Intrepids for their businesses and customers. Remember that any repetitive or burdensome data transfer can be eliminated by using Intrepids.

We would be honored if you felt like passing these posts on to colleagues and friends who may have an interest. 

In the next post we start with the first Intrepid example.

Future Topics

Previous Intrepid related posts available at intrep-id.com.

  • Introduction – this
  • Travel Intrepid 
  • Resume Intrepid
  • Other Intrepids 
  • Personal Health Profile Intrepid
  • Data Preferences Intrepid
  • Support / Receipt Intrepid
  • An Invoicing / Payment application
  • State Government Applications 
  • Summing It Up and Current Status

https://intrep-id.com

Kentico to focus on CMS & DXP

Kontent by Kentico announced that it raised a $40 million investment from Expedition Growth Capital and became a standalone company. Kontent.ai, which originally started in 2015 as an internal startup within Kentico, will now operate as a separate company focused on high-end enterprise organizations. This allows Kentico Xperience to return to its roots and refresh its name to Kentico. Petr Palas, the founder of Kentico, is now Chairman of the Board for both companies, and the newly formed board has appointed Dominik Pinter as Chief Executive Officer of Kentico.

Kentico started with a content management system (CMS) in 2004 and has since created two products: a digital experience platform (DXP) with content management, digital marketing, and commerce capabilities, and a headless CMS. In May 2020, the company split into two divisions: Kentico Xperience (DXP) and Kontent by Kentico (headless CMS).

The investment in the Kontent by Kentico division will be redirected straight into the DXP. With heavy investment in product development, the plan is to hire at least 60 more people over the next 12 months to join the 160+ member global team.

https://www.kentico.com

Netlify announces investments for the Jamstack Innovation Fund

Netlify, a platform for modern web development, announced the first cohort of the Jamstack Innovation Fund, created by Netlify to support the early-stage companies that are driving forward the modern web by arming developer teams with Jamstack-based tooling and practices.

Jamstack is an architectural approach that decouples the web experience from data and business logic, improving flexibility, scalability, performance, and maintainability. Each of the startups Netlify has invested in offers a unique technology that adds to the best development experience for the web:

  • ChiselStrike, a prototype-to-production data platform
  • Clerk, the first authentication service purpose-built for Jamstack
  • Clutch, a visual editor for Jamstack solutions
  • Convex, a global state management platform
  • Deno, a modern runtime for JavaScript and TypeScript
  • Everfund, a developer-first nonprofit tool to build custom fundraising systems
  • NuxtLabs, making web development intuitive with NuxtJS, an open source framework for Vue.js
  • Snaplet, a tool for copying Postgres databases
  • TakeShape, a GraphQL API mesh
  • Tigris Data, a zero-ops backend for web and mobile apps

The Fund has a goal of investing $10 million in the Jamstack ecosystem. In addition to a $100,000 investment, Netlify provides a free startup program. Netlify is accepting rolling submissions to the Jamstack Innovation Fund.

https://www.netlify.com/jamstack-fund/

IBM Research open-sources toolkit for Deep Search

IBM open-sourced part of the IBM Deep Search Experience as a new toolkit, Deep Search for Scientific Discovery (DS4SD), for scientific research and businesses, with the goal of accelerating scientific discovery.

To help achieve this goal, IBM is publicly releasing a key component of the Deep Search Experience, its automatic document conversion service. It allows users to upload documents interactively and inspect a document’s conversion quality. DS4SD has a simple drag-and-drop interface, making it easy for non-experts to use. IBM is also releasing deepsearch-toolkit, a Python package with which users can programmatically upload and convert documents in bulk.

Deep Search uses AI to collect, convert, curate, and ultimately search huge document collections for information that is too specific for common search tools to handle. It collects data from public, private, structured, and unstructured sources and leverages state-of-the-art AI methods to convert PDF documents into easily decipherable JSON format with a uniform schema that is ideal for today’s data scientists. It then applies dedicated natural language processing and computer vision machine-learning algorithms on these documents and ultimately creates searchable knowledge graphs.
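The convert-then-graph idea can be sketched in Python. The JSON shape and graph construction below are invented for illustration; they are not Deep Search's actual schema or API, only a toy version of the pipeline (convert a document to structured JSON, extract entities, link them into a searchable graph):

```python
# Illustrative only: a toy "converted document" in a uniform JSON-like shape,
# and a function that links extracted entities into a small knowledge graph
# of (subject, relation, object) triples.
converted = {
    "title": "Battery electrolytes",
    "sections": [
        {"heading": "Results",
         "entities": [("LiPF6", "material"), ("ethylene carbonate", "solvent")]},
        {"heading": "Methods",
         "entities": [("LiPF6", "material")]},
    ],
}

def build_graph(doc):
    """Link each extracted entity to the document title (toy knowledge graph)."""
    edges = set()  # a set deduplicates repeat mentions across sections
    for section in doc["sections"]:
        for name, kind in section["entities"]:
            edges.add((doc["title"], "mentions", name))
            edges.add((name, "is_a", kind))
    return edges

graph = build_graph(converted)
print(sorted(graph))
```

A graph like this is what makes the collection searchable by relationships (which documents mention a given material) rather than by keywords alone.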

https://research.ibm.com/blog/deep-search-toolkit


© 2025 The Gilbane Advisor
