
February 6, 2024

Join Bluesky today (bye, invites!)

Via the Bluesky Blog…

Bluesky is building an open social network where anyone can contribute, while still providing an easy-to-use experience for users. For the past year, we used invite codes to help us manage growth while we built features like moderation tooling, custom feeds, and more. Now, we’re ready for anyone to join.

To mark the occasion, we teamed up with Davis Bickford, an artist on the network, to share why we’re excited about Bluesky. And if deep dives are more your style, we worked with Martin Kleppmann, author of Designing Data-Intensive Applications and technical advisor to Bluesky, to write a paper that goes into more detail about the technical underpinnings of Bluesky.

In the coming weeks, we’re excited to release labeling services, which will allow users to stack more options on top of their existing moderation preferences. This will let other organizations and individuals run their own moderation services that account for industry-specific knowledge or particular cultural norms, among other preferences.

When you log in to Bluesky, it might look and feel familiar — the user experience should be straightforward. But under the hood, we’ve designed the app in a way that puts control back in your hands. Here, your experience online isn’t controlled by a single company.

This month, we’ll be rolling out an experimental early version of “federation,” the feature that makes the network so open and customizable.

https://bsky.social/about/blog/02-06-2024-join-bluesky

Apple releases AI model for instruction-based image editing

Via VentureBeat…

Apple released an open-source AI model called MGIE (MLLM-Guided Image Editing) that can edit images based on natural language instructions. MGIE leverages multimodal large language models (MLLMs) to interpret user commands and perform pixel-level manipulations. The model can handle a range of editing tasks, such as Photoshop-style modification, global photo optimization, and local editing. MGIE is the result of a collaboration between Apple and researchers from the University of California, Santa Barbara.

MGIE integrates MLLMs into the image editing process in two ways: First, it uses MLLMs to derive expressive instructions from user input. For example, given the input “make the sky more blue,” MGIE can produce the instruction “increase the saturation of the sky region by 20%.”

Second, it uses MLLMs to generate visual imagination, a latent representation of the desired edit. This representation captures the essence of the edit and can be used to guide the pixel-level manipulation. MGIE’s training scheme jointly optimizes the instruction derivation, visual imagination, and image editing modules.
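To make that two-stage flow concrete, here is a minimal Python sketch of the pipeline as described above. Every name in it (MLLMStub, PixelEditorStub, edit_image) is a hypothetical placeholder invented for illustration, not MGIE’s actual API; the real implementation is in the GitHub repository mentioned below.

class MLLMStub:
    """Stands in for the multimodal large language model (MLLM)."""

    def derive_instruction(self, image, command):
        # Stage 1: expand a terse user command into an expressive
        # editing instruction. A real MLLM would condition on the
        # image; this stub just echoes the command.
        return f"expressive instruction derived from {command!r}"

    def imagine_edit(self, image, instruction):
        # Stage 2: produce a latent "visual imagination" of the
        # desired edit. A fixed vector stands in for the latent here.
        return [0.0] * 8


class PixelEditorStub:
    """Stands in for the pixel-level image editing module."""

    def apply(self, image, latent):
        # Guided by the edit latent, perform the actual pixel-level
        # manipulation. This sketch returns the image unchanged.
        return image


def edit_image(mllm, editor, image, command):
    instruction = mllm.derive_instruction(image, command)
    latent = mllm.imagine_edit(image, instruction)
    return editor.apply(image, latent)


edited = edit_image(MLLMStub(), PixelEditorStub(), image=None,
                    command="make the sky more blue")

In the real model the three pieces are not wired together after the fact: as the summary notes, instruction derivation, visual imagination, and the editing module are jointly optimized during training.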

MGIE is available as an open-source project on GitHub. The project provides a demo notebook that shows how to use MGIE for various editing tasks. Users can also try out MGIE through a web demo at Hugging Face Spaces.

https://venturebeat.com/ai/apple-releases-mgie-a-revolutionary-ai-model-for-instruction-based-image-editing/
