SDL announced that its machine translation (MT) technology is now available through the Reynen Court LLC platform, enabling law firms and legal departments to provision and deploy SDL Machine Translation to securely translate any type of legal document or file. The Reynen Court platform combines a content-rich solution store with a control panel that simplifies how law firms and legal departments source, evaluate, deploy, monitor, and manage legal technology applications. Platform users can employ a multicloud technology strategy that includes on-premises data centers and virtual private clouds under the user's control, without compromising information security, environmental stability, or infrastructure control. The platform also allows users to manage software subscriptions and evaluate usage and consumption metrics from across their tech stack in one place, helping them optimize their legal technology investments. Reynen Court was established with support from a consortium of 19 leading global law firms. The latest version of SDL Machine Translation goes beyond automatic translation and integrates with multiple platforms to power digital customer experience, eDiscovery, due diligence, contract review, analytics, internal communications, and collaboration.
Category: Computing & data
Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.
Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.
Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.
Webiny announced the availability of Webiny Serverless Headless CMS (beta). Looking at the headless CMS market, there are several options to choose from, but none of them are both serverless and open source. Half of them run on "traditional" infrastructure, like virtual machines, and the other half are standard SaaS products. The goal was to build something that scales out of the box to handle huge amounts of traffic, no matter how spiky, with a solution that is customizable and has zero infrastructure-management overhead. Today, this is only achievable with serverless infrastructure.
Included in the package:
- Content modeling interface — You can not only model your content, but also build the interface your content editors will use to enter it. Place form inputs inside a grid layout and split it into multiple columns and rows.
- Content localization — A simple and intuitive way to input and serve content in multiple languages. The interface stays clean and easy for editors to use, whether you have 1 language or 20.
- GraphQL API — The API is at the core of every headless CMS; if you get the API wrong, the whole product suffers. This is why we spent a significant part of our effort on ensuring developers have a great experience using our API. The API also comes with a built-in GraphQL playground, so it's simple to inspect your schema and test your queries (a query sketch follows this list).
- Environments and aliases — With a single click, copy your existing data into a new environment. Modify and update it without affecting your production site. Finally, remap the alias to switch the new environment into production. With this approach you get instant rollback, and there is no need to update and redeploy your code to any of your devices when you make changes.
- Customizable and extendable platform with a microservices architecture — Having a headless CMS is great, but what if it only gets you halfway? What if you need to write custom code or add logic with specific rules that are outside the scope of the headless CMS? Using the Serverless Web Development Framework, you can build any type of logic your project requires and deploy it alongside the headless CMS as a separate microservice.
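To make the GraphQL API concrete, below is a minimal sketch of querying a headless CMS content endpoint from Python. The endpoint URL, token, and the listArticles model are illustrative assumptions rather than Webiny's documented schema, which is generated from the content models you define.

```python
import requests

# Hypothetical read API endpoint and token; Webiny generates the GraphQL
# schema from your content models, so the real query shape will differ.
API_URL = "https://api.example.com/cms/read/en-US"
API_TOKEN = "<your-api-token>"

QUERY = """
query ListArticles {
  listArticles(limit: 10) {
    data {
      id
      title
      body
    }
  }
}
"""

response = requests.post(
    API_URL,
    json={"query": QUERY},
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# Print each article's id and title from the GraphQL response envelope.
for article in response.json()["data"]["listArticles"]["data"]:
    print(article["id"], article["title"])
```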
Xero, the global small business platform, announced the release of new search functionality on Xero's app marketplace. With more than 800 third-party apps that connect to the platform, Xero's app marketplace now serves up suggestions based on a small business's profile when they are logged into Xero, and an improved search toolbar presents popular apps and quick links, providing a more personalized, intuitive, and efficient experience. The new search functionality is powered by Coveo's recommendations engine, which uses machine learning to generate the app suggestions.
Newgen Software, a global provider of a low-code digital automation platform for managing content, processes, and communication, announced it has launched an enhanced version of its document classification service for high-volume document-handling environments. Intelligent Document Classifier 1.0 allows users to surface hidden insights by classifying documents based on structural and/or textual features. It uses machine learning (ML) and artificial intelligence (AI) to enable layout- and content-based document classification (a generic sketch of the content-based approach follows the feature list). Organizations can leverage the solution to automatically classify documents such as sales/purchase orders, enrollment and claim forms, legal documents, mailroom documents, contracts, correspondence, and others. This helps ensure important information is available, thereby reducing the risks and costs associated with manual document management.
Key features include:
- Image Classification – Allows users to automatically classify images using neural networks and deep learning algorithms based on structural features
- Content Classification – Enables document classification based on content, in the absence of structural features
- Trainable Machine Learning – Auto-learns definitions and features of a document class and creates a trained model
- Admin Dashboard – Generates analytics reports for a 360-degree view of the process
- Integration Capabilities – Facilitates easy integration with core business applications, content management platforms, and document capture applications
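As a rough illustration of what content-based classification involves, here is a minimal sketch using scikit-learn. The pipeline, training examples, and class labels are assumptions for illustration only; Newgen has not published its implementation.

```python
# A generic sketch of content-based document classification; illustrative
# only, not Newgen's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: extracted document text plus class labels.
train_texts = [
    "Purchase order number 4512 for 200 units of item A ...",
    "This agreement is entered into by and between the parties ...",
    "Claim form: policyholder name, policy number, date of loss ...",
]
train_labels = ["purchase_order", "contract", "claim_form"]

# TF-IDF features plus a linear classifier are a common baseline for
# classifying documents by their textual content.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_texts, train_labels)

# Predict the class of a new, unseen document.
print(classifier.predict(["Attached is the purchase order for 50 units"]))
```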
Amazon Web Services and Slack Technologies announced a new multi-year agreement to deliver solutions for enterprise workforce collaboration. Slack and AWS will strategically partner to help distributed development teams communicate and become more efficient and agile in managing their AWS resources from inside Slack. Slack will migrate its Slack Calls capability for all voice and video calling to Amazon Chime, AWS's communications service that lets users meet, chat, and place business calls. Slack is also leveraging AWS's global infrastructure to support enterprise customers' adoption of its platform and to offer them data residency – the ability to choose the country or region where their data is stored at rest, while fulfilling compliance requirements. Slack continues to rely on AWS as its preferred cloud provider and will use a range of AWS services, including storage, compute, database, security, analytics, and machine learning, to develop new collaboration features. Additionally, AWS has agreed to use Slack to simplify the way its own teams communicate and work together.
Slack and AWS will also extend product integration and deepen interoperability. These integrations include:
- Amazon Chime infrastructure with Slack Calls
- AWS Key Management Service with Slack Enterprise Key Management (EKM)
- AWS Chatbot integration with Slack
- Amazon AppFlow integration with Slack
Atlassian announced 12 new collaboration features, automations, and integrations to help developers take their time back and ship better code, faster. Too many disconnected tools, manual processes, and constantly changing collaboration practices are blocking developers from reaching the full promise of DevOps. Developers need less context switching. Fewer meetings. Fewer pings from IT about security incidents. Just more time to code and deliver value to customers. The goal is to help developers focus on their code by connecting development, IT operations, and business teams with automation that spans Atlassian products and third-party tools. With Jira as the backbone and ultimate source of truth, Atlassian unifies all DevOps work to reduce collaboration overload. Deep integrations between Jira Software Cloud and Bitbucket Cloud, GitHub, and GitLab mean that issue tracking and project updates happen right where you code, automatically. No need to go back to Jira. And your project manager won't have to ping you for updates and interrupt your coding flow, because your project board will automatically update based on your work in Bitbucket, GitHub, or GitLab.
SDL announced it has entered into a technical partnership with DRUID, specialists in conversational AI, to launch multilingual virtual assistants for enterprise organizations that enable real-time communication through chatbots. By integrating SDL Machine Translation with DRUID virtual assistants, companies will be able to conduct chatbot conversations in different languages with employees, customers, partners, and suppliers. The solution offers a real-time "interpreter mode" that translates conversations, along with a "live chat" mode that translates into multiple languages.
Chatbots are commonly configured to handle complex question-and-answer interactions in different languages, but language-specific customization can be complex, time-consuming, and costly. The issue becomes even more complicated when a chatbot is connected to various data sources (ERP, CRM, BI, HRIS, or other types of business applications). With SDL Machine Translation, chatbots can converse in multiple languages without the need to translate data sources or conversational flows.
SDL Machine Translation provides the neural machine translation (NMT 2.0) foundation, and the combined solution includes the ability to control brand voice with a brand-specific terminology dictionary that contains company-specific product names and unique terminology. The machine learning solution uses anonymized chat logs for continuous language model improvement.
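The "interpreter mode" pattern described above can be sketched in a few lines: translate the user's message into the bot's base language, run the existing conversational logic unchanged, then translate the reply back. The MT endpoint, payload, and helper names below are hypothetical placeholders, not SDL Machine Translation's or DRUID's actual APIs.

```python
import requests

# Hypothetical machine translation endpoint; SDL's real API will differ.
MT_ENDPOINT = "https://mt.example.com/translate"

def translate(text: str, source: str, target: str) -> str:
    """Translate text between languages via a generic MT REST service."""
    response = requests.post(
        MT_ENDPOINT,
        json={"text": text, "source": source, "target": target},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["translation"]

def bot_reply(message_in_english: str) -> str:
    # Stand-in for the chatbot's existing, English-only conversational flow.
    return f"You said: {message_in_english}"

def interpreter_mode(user_message: str, user_language: str) -> str:
    # Inbound: user language -> bot's base language.
    english = translate(user_message, source=user_language, target="en")
    reply = bot_reply(english)
    # Outbound: bot's base language -> user language.
    return translate(reply, source="en", target=user_language)

# e.g. interpreter_mode("Wo finde ich meine Rechnung?", "de")
```

Keeping the bot's flows and data sources in a single base language is what avoids per-language customization: only the messages at the conversation boundary are translated.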
Ben Thompson has a member-only post on Stratechery, "Steve Jobs and OpenDoc, Fluid Framework, Microsoft Lists," that is worth a read if you're one of his subscribers.
An article on The Verge, with quotes from Microsoft's Jared Spataro about Fluid, reminded Thompson of OpenDoc, and he begins his own thoughts on Fluid with a bit of history on Steve Jobs' decision to kill OpenDoc in 1997. Thompson suggests the reason was a combination of Microsoft's dominant market share, and
that the application model was simply a much better approach for the personal computer era. Given the lack of computing power and lack of connectivity, it made much more sense to have compatible documents made by common applications than to try and create common documents with compatible components — at least with the level of complexity implicit in OpenDoc.
Thanks to Thompson for giving me an excuse to indulge in a little history of my own, which largely supports his view. Below is what I shared with him. The history is fun, but the new Fluid Framework is also worth a closer look.
———————-
Fluid also reminded me of the competing OpenDoc and OLE approaches in the early 90s. To supplement your history…
At the first Documation conference in February 1994, I moderated a session that included Apple Chief Scientist Larry Tesler and Tony Williams, Microsoft Software Architect and co-creator of COM. I had asked each of them to discuss requirements for, and their approaches to, building a "compound document architecture". OpenDoc was naturally appealing to me (and many of my subscribers) at the time, but Tony made a strong case for OLE. Tony's argument for OLE was technical, but he also addressed the issue from a business point of view, arguing that OpenDoc was too radical a change for both developers and end users. While this was more of an issue for Microsoft, with its large developer community and installed base, OpenDoc was radical, and I expect that is why it languished at Apple and why Jobs ultimately rejected it.
Below is an excerpt from my report about the session. The complete report and conference program can be found at the link above.
Technology Trends — Document Computing
On Wednesday the general session was divided into two sections. One covered new technologies being developed to enhance document computing and document management. The other presented senior managers from large corporations who described their own document management needs.
Your editor opened the technology session by describing three components of current document management systems, each of which presages future developments. Objects — whether in terms of object-oriented databases, object-oriented programming, or multimedia document component "information objects" — play a big role in making systems more flexible and capable of dealing with complexity. Building an architecture to manage and share distributed objects, and to link and assemble them into document form, is a requirement of many enterprise-wide document management solutions. Finally, the document metaphor is increasingly seen as the most effective and friendly way to interface not only with document management systems, but with information in general.
Today, these capabilities are built either at the application level, or as “middleware”. For many reasons (e.g., application interoperability, performance, and ease of application development), it would help instead to have support for these capabilities at the operating environment level.
Previous attempts at compound document architectures to provide such an environment have failed. But this is clearly something we need, and eventually will get. Whoever defines and builds such an architecture will be in a powerful position to dominate the IT market. We can expect fierce battles among the platform and architecture vendors to control this architecture. The two leading candidates today are Microsoft's OLE, and the Component Integration Laboratories consortium's OpenDoc (based on Apple technology).
Larry Tesler from Apple described the "Information Tidal Wave" (his alternative to "superhighway") coming with the growth of electronic multimedia documents and the rapid building of electronic document repositories. IS managers will face severe new problems arising from the need to manage these repositories. Larry positioned OpenDoc as a core technology for supporting the management and assembly of these new kinds of documents.
Microsoft's Tony Williams focused on user requirements for a compound document architecture. Compound documents should be thought of as "compound views" of information; documents are just one form of information and thus need to be handled as part of an information architecture. Information architectures in turn need to be able to manage many different types of multimedia data for both document and data applications.
A standard "containment model" is needed, Williams said, to allow applications to share and organize information objects. Previous attempts at standard compound document architectures, e.g., ODA (Office Document Architecture, later Open Document Architecture), failed because they attempted to define an overly restrictive representation. Such systems also need to handle ad hoc information (for example, that created with a personal information manager) as well as structured documents.
Tony emphasized the need to protect both user investments in information and developer investments in applications. While a compound document architecture environment is a requirement of any new operating environment, there must be an evolutionary path provided — a compound document architecture that forces a radical change too quickly will not gain acceptance. Tony positioned OLE as the technology that meets these requirements.
When asked, both Tony and Larry Tesler claimed that OpenDoc and OLE should work together, and each described generally, in terms of the architecture he was promoting, how that could happen. However, this is definitely an area where corporate users need to maintain continued and aggressive vigilance to ensure that operating environment interoperability results. It would certainly not be wise, at least not yet, to assume that one of these approaches will become dominant.