Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Steve Jobs, OpenDoc, and Fluid

Ben Thompson has a member-only post on Stratechery, “Steve Jobs and OpenDoc, Fluid Framework, Microsoft Lists,” that is worth a read if you’re one of his subscribers.

An article on The Verge and quotes from Microsoft’s Jared Spataro about Fluid reminded Thompson of OpenDoc, and he begins his own thoughts on Fluid with a bit of history on Steve Jobs’ decision to kill OpenDoc in 1997. Thompson suggests the reason was a combination of Microsoft’s dominant market share, and

that the application model was simply a much better approach for the personal computer era. Given the lack of computing power and lack of connectivity, it made much more sense to have compatible documents made by common applications than to try and create common documents with compatible components — at least with the level of complexity implicit in OpenDoc.

Thanks to Thompson for giving me an excuse to indulge in a little history of my own, which largely supports his view. Below is what I shared with him. The history is fun, but the new Fluid Framework is also worth a closer look. 

———————-

Fluid also reminded me of the competing OpenDoc and OLE approaches in the early 90s. To supplement your history…

At the first Documation conference in February 1994 I moderated a session that included Apple Chief Scientist Larry Tesler and Tony Williams, Microsoft Software Architect and co-creator of COM. I had asked each of them to discuss requirements for, and their approaches to, building a “compound document architecture”. OpenDoc was naturally appealing to me (and many of my subscribers) at the time, but Tony made a strong case for OLE. Tony’s argument for OLE was technical, but he also addressed the issue from a business point of view and argued that OpenDoc was too much of a radical change for both developers and end users. While this was more of an issue for Microsoft, with its large developer community and installed base, OpenDoc was indeed radical, and I expect that was the reason it languished at Apple and the reason for Jobs’ ultimate rejection.

Below is an excerpt from my report about the session. The complete report and conference program can be found at the link above.

Technology Trends — Document Computing

On Wednesday the general session was divided into two sections. One covered new technologies being developed to enhance document computing and document management. The other presented senior managers from large corporations who described their own document management needs.

Your editor opened the technology session by describing three components of current document management systems, each of which presages future developments. Objects — whether in terms of object-oriented databases, object-oriented programming, or multimedia document component “information objects” — play a big role in making systems more flexible and capable of dealing with complexity. Building an architecture to manage and share distributed objects, and to link and assemble them into document form, is a requirement of many enterprise-wide document management solutions. Finally, the document metaphor is increasingly seen as the most effective and friendly way to interface not only with document management systems, but with information in general.

Today, these capabilities are built either at the application level, or as “middleware”. For many reasons (e.g., application interoperability, performance, and ease of application development), it would help instead to have support for these capabilities at the operating environment level.

Previous attempts at compound document architectures to provide such an environment have failed. But this is clearly something we need, and eventually will get. Whoever defines and builds such an architecture will be in a powerful position to dominate the IT market. We can expect fierce battles among the platform and architecture vendors to control this architecture. The two leading candidates today are Microsoft’s OLE, and the Component Integration Lab consortium’s OpenDoc (based on Apple technology).

Larry Tesler from Apple described the “Information Tidal Wave” (his alternative to “superhighway”) coming with the growth of electronic multimedia documents, and with the rapid building of electronic document repositories. IS managers will face severe new problems arising from the need to manage these repositories. Larry positioned OpenDoc as a core technology for supporting the management and assembly of these new kinds of documents.

Microsoft’s Tony Williams focused on user requirements for a compound document architecture. Compound documents should be thought of as “compound views” of information; documents are just one form of information and thus need to be handled as part of an information architecture. Information architectures in turn need to be able to manage many different types of multimedia data for both document and data applications.

A standard “containment model” is needed, Williams said, to allow applications to share and organize information objects. Previous attempts at standard compound document architectures, e.g., ODA (Office Document Architecture, later renamed Open Document Architecture), failed because they attempted to define an overly restrictive representation. Such systems also need to handle ad hoc information (for example, that created with a personal information manager) as well as structured documents.

Tony emphasized the need to protect both user investments in information and developer investments in applications. While a compound document architecture environment is a requirement of any new operating environment, there must be an evolutionary path provided — a compound document architecture that forces a radical change too quickly will not gain acceptance. Tony positioned OLE as the technology that meets these requirements.

When asked, both Tony and Larry Tesler claimed that OpenDoc and OLE should work together and described generally — each in terms of the architecture they were promoting — how that could happen. However, this is definitely an area where there needs to be continued and aggressive vigilance on the part of corporate users to ensure that operating environment interoperability results. It would certainly not be wise — at least not yet — to assume that one of these approaches will become dominant.

Natural language understanding

Natural language understanding is a subtopic of natural language processing in artificial intelligence that deals with machine reading comprehension.

Lucidworks announces advanced linguistics package

Lucidworks announced the Advanced Linguistics Package for Lucidworks Fusion to power personalized search for users in Asian, European, and Middle Eastern markets. Lucidworks now embeds Rosette, the text analytics technology from Basis Technology, a provider of AI for natural language processing. According to the companies, building, testing, and maintaining the many algorithms and models required to properly support each language is challenging and expensive. Asian, Middle Eastern, and certain European languages require additional processing to handle unique linguistic phenomena such as the lack of whitespace, compound words, and multiple forms of the same word. The combination of Basis Technology’s linguistics with the AI-powered search platform of Lucidworks Fusion is expected to provide accuracy and performance enhancements in information retrieval for the digital experience. Lucidworks’ Advanced Linguistics Package provides language processing in more than 30 languages and advanced entity extraction in 21 languages. By accurately analyzing text in the language it was written in, Rosette helps the Lucidworks Fusion platform deliver the right answers to every user, regardless of where they work or what language they use.

https://lucidworks.com, https://www.basistech.com

Netlify announced general availability of Netlify Build Plugins

Netlify, creator of the Jamstack web architecture, announced the general availability of Netlify Build Plugins — tools to easily customize and automate CI/CD workflows for Jamstack websites and web applications. Development teams can choose from a catalog of integrations created by developers at Netlify and in the community that can be installed directly from the Netlify UI. They also have the flexibility to build their own plugins using a straightforward API. New capabilities enabled by Build Plugins include the ability to run an end-to-end Cypress test, audit for accessibility with Pa11y, and more. Previously, developers had to set up changes or integrations to the build process from scratch, configuring every command to run at build, downloading and validating every dependency, and writing the code to make it all work. Now any developer can simply choose an available Build Plugin, click “install” from the Netlify UI, and then select sites where the plugin should be enabled. Build Plugins are available for free to use with every Netlify plan.
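For a sense of what writing a plugin involves, here is a minimal sketch of a custom Build Plugin, assuming Netlify’s documented event handlers (onPreBuild, onPostBuild) and build utilities; the commands and the loosely typed arguments are illustrative rather than definitive.

```typescript
// index.ts — a minimal, hypothetical Netlify Build Plugin.
// Plugins export named event handlers that Netlify calls during the build.

export const onPreBuild = async ({ utils }: { utils: any }) => {
  try {
    // Run a shell command before the site is built.
    await utils.run.command('echo "running pre-build checks"');
  } catch (error) {
    // Mark the build as failed if the pre-build step errors out.
    utils.build.failBuild('Pre-build checks failed', { error });
  }
};

export const onPostBuild = async ({ constants }: { constants: any }) => {
  // constants.PUBLISH_DIR points at the directory Netlify will deploy.
  console.log(`Build output is in ${constants.PUBLISH_DIR}`);
};
```

A site then enables a plugin either from the Netlify UI or by adding a [[plugins]] entry with the plugin’s package name to netlify.toml.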

https://www.netlify.com

8th Wall launches Face Effects tool for AR facial animations

8th Wall is launching Face Effects, a new cloud tool that lets developers use augmented reality to create effects that wrap around a person’s face. The face filter developer tools are based on WebAR, which enables AR experiences to be accessed via a web browser instead of an app. 8th Wall Face Effects is designed to give developers and brands control to create face filters that are interactive, real-time, and that live on their own websites. Because it is web-based, Face Effects works across devices (iOS, Android, and desktops with a webcam) and requires no app; users just click a link to experience it. Developers can choose the asset types, file sizes, and content to maximize the value for their audience.

Brands could connect multiple users together to create a shared shopping experience, and integrate with their preferred analytics, customer relationship management, and payment systems in virtual try-on products. Developers can simply scan a QR code to open a cloud editor that adds a 3D object to their face. The edges of virtual sunglasses can stop at the edge of the face because an occluder prevents them from passing through it. Users can try virtual tattoos on their face to see what they look like before making them permanent. With 8th Wall Face Effects, developers can anchor 3D objects to face attachment points, render a face mesh with textures and shaders, and design custom effects. Similar to 8th Wall’s existing World Effects and Image Target AR, Face Effects supports development with web frameworks such as A-Frame and Three.js. New developers can sign up for a 14-day free trial of the 8th Wall platform. Existing developers can log in and get started using the Face Effects project templates.

https://www.8thwall.com, h/t: VentureBeat

Franz Inc announces AllegroGraph v7

Franz Inc., an Artificial Intelligence (AI) developer and supplier of Semantic Graph Database technology for Knowledge Graph Solutions, announced AllegroGraph 7, a solution that allows infinite data integration through a patented approach unifying all data and siloed knowledge into an Entity-Event Knowledge Graph that can support massive big data analytics. AllegroGraph 7 utilizes federated sharding capabilities that drive 360-degree insights and enable complex reasoning across a distributed Knowledge Graph. Hidden connections in data are revealed to AllegroGraph 7 users through a new browser-based version of Gruff, an advanced visualization and graphical query builder.

To support ubiquitous AI, a Knowledge Graph system will have to fuse and integrate data, not just in representation, but in context (ontologies, metadata, domain knowledge, terminology systems), and time (temporal relationships between components of data). The rich functional and contextual integration of multi-modal, predictive modeling and artificial intelligence is what distinguishes AllegroGraph 7 as a modern, scalable, enterprise analytic platform. AllegroGraph 7 is a temporal knowledge graph technology that encapsulates a novel entity-event model natively integrated with domain ontologies and metadata, and dynamic ways of setting the analytics lens on all entities in the system (patient, person, devices, transactions, events, and operations) as prime objects that can be the focus of an analytic (AI, ML, DL) process.
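To make the entity-event idea a little more concrete, here is a rough sketch of the kind of query such a model invites: all events attached to a patient entity within a time window. It assumes only a generic SPARQL 1.1 HTTP endpoint and a hypothetical ex: vocabulary (ex:relatedTo, ex:timestamp); it is not AllegroGraph’s actual schema or API.

```typescript
// entity-event-query.ts — hypothetical entity-event query against a SPARQL endpoint.
// The endpoint URL, vocabulary, and entity IRI are all placeholders.

const ENDPOINT = "https://example.org/repositories/health"; // hypothetical

const query = `
  PREFIX ex: <http://example.org/ontology#>
  SELECT ?event ?type ?time WHERE {
    ?event ex:relatedTo <http://example.org/entity/patient-123> ;
           a ?type ;
           ex:timestamp ?time .
    FILTER (?time >= "2020-01-01T00:00:00Z"^^<http://www.w3.org/2001/XMLSchema#dateTime>)
  }
  ORDER BY ?time
`;

async function main(): Promise<void> {
  // SPARQL 1.1 protocol: POST the query text and ask for JSON results.
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/sparql-query",
      Accept: "application/sparql-results+json",
    },
    body: query,
  });
  const results = await response.json();
  for (const row of results.results.bindings) {
    console.log(row.time.value, row.type.value, row.event.value);
  }
}

main().catch(console.error);
```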

https://allegrograph.com, https://franz.com

Automattic invests in open decentralized comms ecosystem Matrix

Automattic, the open source force behind WordPress, WooCommerce, Longreads, Simplenote, and Tumblr, has made a $4.6M strategic investment in New Vector, the creators of an open, decentralized communications standard called Matrix. New Vector also developed Riot, a Slack rival that runs on Matrix. Matrix is an open source project that publishes the Matrix open standard for secure, decentralized, real-time communication, along with its Apache-licensed reference implementations.

New Vector’s decentralized tech powers instant messaging for a number of government users, including France — which forked Riot to launch a messaging app last year (Tchap) — and Germany, which just announced its armed forces will be adopting Matrix as the backbone for all internal comms; as well as for KDE, Mozilla, Red Hat, Wikimedia, and others.
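For a sense of what the open standard means in practice, the sketch below sends a text message through the Matrix client-server HTTP API (the r0 endpoints current at the time); the homeserver URL, room ID, and access token are placeholders.

```typescript
// send-message.ts — minimal sketch of posting a message via the Matrix
// client-server API. All identifiers below are placeholders.

const HOMESERVER = "https://matrix.example.org"; // hypothetical homeserver
const ROOM_ID = "!abc123:matrix.example.org";    // hypothetical room
const ACCESS_TOKEN = "<access token>";           // obtained via /login

async function sendText(body: string): Promise<void> {
  // Each message event needs a transaction ID unique to this client session.
  const txnId = Date.now().toString();
  const url =
    `${HOMESERVER}/_matrix/client/r0/rooms/` +
    `${encodeURIComponent(ROOM_ID)}/send/m.room.message/${txnId}`;

  const response = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ msgtype: "m.text", body }),
  });
  console.log(await response.json()); // { event_id: "..." } on success
}

sendText("Hello from the open Matrix standard").catch(console.error);
```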

https://vector.im, https://matrix.org, h/t: TechCrunch

 

Luminoso announces enhancements to open data semantic network

Luminoso, which turns unstructured text data into business-critical insights, announced the newest features of ConceptNet, an open data semantic network whose development is led by Luminoso Chief Science Officer Robyn Speer. ConceptNet originated from MIT Media Lab’s Open Mind Common Sense project more than two decades ago, and the semantic network is now used in AI applications around the world. ConceptNet is cited in more than 700 AI papers in Google Scholar, and its API is queried over 500,000 times per day from more than 1,000 unique IPs. Luminoso has incorporated ConceptNet into its proprietary natural language understanding technology, QuickLearn 2.0. ConceptNet 5.8 features:

Continuous deployment: ConceptNet is now set up with continuous integration using Jenkins and deployment to AWS using Terraform, which will make it faster to deploy new versions of the semantic network and easier for others to set up mirrors of the API.

Additional curation of crowd-sourced data: ConceptNet’s developers have filtered entries from Wiktionary that were introducing hateful terminology to ConceptNet without its context. This is part of their ongoing effort to prevent human biases and prejudices from being built into language models. ConceptNet 5.8 has also updated its Wiktionary parser so that it can handle updated versions of the French and German-language Wiktionary projects.

HTTPS support: Developers can now reach ConceptNet’s website and API over HTTPS, improving data transfer security for applications using ConceptNet.
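As a small illustration of the API mentioned above, the sketch below fetches a few edges for a concept over HTTPS. It assumes a runtime with a built-in fetch (for example Node 18+), and the concept URI is just an example.

```typescript
// conceptnet-query.ts — minimal sketch of querying the ConceptNet API over HTTPS.

async function printEdges(conceptUri: string): Promise<void> {
  // The API returns JSON; "edges" lists assertions involving the concept.
  const response = await fetch(`https://api.conceptnet.io${conceptUri}?limit=5`);
  if (!response.ok) {
    throw new Error(`ConceptNet request failed: ${response.status}`);
  }
  const data = await response.json();
  for (const edge of data.edges ?? []) {
    // Each edge links two concepts via a relation such as /r/IsA or /r/RelatedTo.
    console.log(`${edge.start.label} --[${edge.rel.label}]--> ${edge.end.label}`);
  }
}

printEdges("/c/en/knowledge_graph").catch(console.error);
```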

http://blog.conceptnet.io/posts/2020/conceptnet-58/, https://luminoso.com/how-it-works


© 2024 The Gilbane Advisor
