GIGXR, Inc., a provider of extended reality (XR) learning systems for instructor-led teaching and training, announced the availability of its GIG Immersive Learning System for the Fall 2020 Northern Hemisphere academic year. The cloud-based System was created to enhance learning outcomes by simulating complex, real-life teaching and training scenarios in medical and nursing schools, higher education, healthcare, and hospitals. The GIG Immersive Learning System is available for demos and pre-order now, and includes three core components:
- Remote and Socially Distanced Learning: Enables teaching and training with students in a distributed classroom through extended reality. Students can be co-located, remote or safely socially distanced, and participate in sessions anywhere using 3D mixed reality immersive devices and mobile phones, tablets or laptops for a 2.5D experience.
- Mixed Reality Applications: GIGXR’s products HoloPatient and HoloHuman run on Microsoft’s HoloLens 2, placing the 3D digital world in a collaborative physical space for safe development of clinical skills and exploration into human pathologies and anatomies.
- Immersive Learning Platform: Cloud-based infrastructure that supports GIGXR’s mixed reality applications and remote learning capabilities with additional features such as visual login, instructor content creation, holographic content management, session planning, roles and rights, license management, security, privacy, and long-term data management.
Doctor Evidence (DRE) has updated its newly launched DOC Analytics ("Digital Outcome Conversion") platform with network meta-analysis (NMA) capabilities. DOC Analytics provides immediate quantitative insights into the universe of medical information using artificial intelligence/machine learning (AI/ML) and natural language processing (NLP). With the addition of indirect treatment comparison and landscape analysis using NMA, DOC Analytics is a critical, daily-use tool for strategic functions in life sciences companies. DOC Analytics allows users to conduct analyses comprising real-time results from clinical trials, real-world evidence (RWE), published literature, and any custom imported data to yield insightful direct meta-analysis, network meta-analysis, cohort analysis, or bespoke statistical outputs. Analyses are informed by AI/ML and can be made fit-for-purpose with filters for demographics, comorbidities, sub-populations, inclusion/exclusion selections, and other relevant parameters.
Cloudera announced the premiere of Cloudera Data Platform Private Cloud (CDP Private Cloud). CDP Private Cloud is built for hybrid cloud, seamlessly connecting on-premises environments to public clouds with consistent, built-in security and governance. CDP Private Cloud, built on Red Hat OpenShift, is an enterprise data cloud that separates compute and storage for greater agility, ease of use, and more efficient use of private and public cloud infrastructure. Together, Red Hat OpenShift and CDP Private Cloud help create an essential hybrid, multi-cloud data architecture, enabling teams to rapidly onboard mission-critical applications and run them anywhere, without disrupting existing ones. Companies can now collect, enrich, report, serve, and model enterprise data for any business use case in any cloud. CDP Private Cloud is in tech preview for select customers and is expected to be generally available later this summer.
OpenAI announced that it is releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems, which are designed for one use case, the API today provides a general-purpose "text in, text out" interface, allowing users to try it on virtually any English language task. You can now request access in order to integrate the API into your product, develop an entirely new application, or help explore the strengths and limits of this technology. Given any text prompt, the API will return a text completion, attempting to match the pattern you gave it. You can "program" it by showing it just a few examples of what you'd like it to do; its success generally varies depending on how complex the task is. The API also allows you to hone performance on specific tasks by training on a dataset (small or large) of examples you provide, or by learning from human feedback provided by users or labelers. The API is designed to be both simple for anyone to use and flexible enough to make machine learning teams more productive. In fact, many OpenAI teams are now using the API so that they can focus on machine learning research rather than distributed systems problems. Today the API runs models with weights from the GPT-3 family with many speed and throughput improvements.
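The "program by example" workflow described above can be sketched as plain prompt construction: a handful of input/output pairs are concatenated into a single text prompt, and the model completes the final line. This is a minimal sketch; the example task, the `Input:`/`Output:` formatting, and the helper name are illustrative assumptions, not details specified in the announcement.

```python
# Sketch of few-shot prompting against a "text in, text out" interface.
# The example pairs establish a pattern; the model is asked to continue it.

def build_prompt(examples, query):
    """Concatenate (input, output) example pairs and a new query into one prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    # The final line is left open for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical task: English-to-French translation, learned from two examples.
examples = [
    ("cheese", "fromage"),
    ("house", "maison"),
]
prompt = build_prompt(examples, "cat")
print(prompt)
# This prompt text would then be sent to the completion API, which returns
# a text completion attempting to match the demonstrated pattern.
```

The same structure applies to any task the pattern can express (classification, summarization, Q&A); only the example pairs change.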
The field’s pace of progress means that there are frequently surprising new applications of AI, both positive and negative. OpenAI says it will terminate API access for obviously harmful use cases, such as harassment, spam, radicalization, or astroturfing. But acknowledging that it cannot anticipate all of the possible consequences of this technology, the company is launching in a private beta rather than general availability, building tools to help users better control the content the API returns, and researching safety-relevant aspects of language technology (such as analyzing, mitigating, and intervening on harmful bias). OpenAI plans to share what it learns so that its users and the broader community can build more human-positive AI systems.