
Apple opens digital journal to showcase its machine learning developments

Apple opened a new digital journal to showcase some of the developments it is making in the field of machine learning. In the first entry, it explains what it is doing to help improve the realism of synthetic images, which can, in turn, be used to teach algorithms how to classify images, without needing to painstakingly label them manually.

One of the biggest hurdles in artificial intelligence is teaching it things that humans take for granted. While you could conceivably hand-program an AI to understand everything, that would take a very, very long time and would be nigh on impossible in practice. Instead, machine learning lets us teach algorithms much as you would a human, but that requires specialist techniques.


When it comes to teaching a system how to classify images, synthetic images can be used, but as Apple points out in its first blog post, that can lead to poor generalization, because synthetic images often lack the quality and detail of real ones. That is why the company has been working on producing better, more detailed images for machines to learn from.

Although this is far from a new technique, it has traditionally been a costly one. Apple developed a much more economical “refiner,” which looks at unlabeled real images and uses them as a reference to refine synthetic images into something much closer to reality.
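
In broad strokes, that refiner is an image-to-image neural network. The snippet below is a minimal sketch in PyTorch of what such a network could look like; the layer sizes, block count, and image dimensions are assumptions for illustration, not Apple’s actual architecture or code.

```python
# Minimal sketch of a fully convolutional "refiner" network.
# The architecture (channel counts, block count) is an assumption for
# illustration; it is not Apple's published model or code.
import torch
import torch.nn as nn


class ResnetBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection; keeps the image size fixed."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.conv(x))


class Refiner(nn.Module):
    """Maps a synthetic image to a refined image of the same size."""

    def __init__(self, in_channels=1, features=64, num_blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, features, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResnetBlock(features) for _ in range(num_blocks)])
        self.head = nn.Conv2d(features, in_channels, kernel_size=1)

    def forward(self, synthetic):
        x = torch.relu(self.stem(synthetic))
        x = self.blocks(x)
        return torch.tanh(self.head(x))  # refined image with values in [-1, 1]


if __name__ == "__main__":
    refiner = Refiner()
    synthetic_batch = torch.randn(8, 1, 35, 55)  # e.g. small grayscale images
    refined_batch = refiner(synthetic_batch)
    print(refined_batch.shape)  # torch.Size([8, 1, 35, 55])
```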

However, how do you make sure the refiner has strong real source material to base its refinements on? That requires a secondary image classifier, known as the discriminator, whose job is to tell real images from synthetic ones. The two go back and forth, with the refiner attempting to “trick” the discriminator by gradually building up the synthetic image until it possesses far more of the detail of the real images. Once the discriminator can no longer reliably tell them apart, the process halts and moves on to a new image.
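
Conceptually, this is the adversarial setup popularized by generative adversarial networks: the discriminator learns to tell real images from refined synthetic ones, and the refiner is trained to fool it. The sketch below, which reuses the Refiner from the previous snippet, shows one round of that back-and-forth; the Discriminator architecture, losses, and optimizers are assumptions for illustration rather than Apple’s published training setup.

```python
# Sketch of one adversarial round between the refiner and a discriminator.
# The Discriminator architecture, losses, and optimizer usage are assumptions
# for illustration; they simplify the training scheme Apple describes.
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Outputs one logit per image: high means it looks real, low means synthetic."""

    def __init__(self, in_channels=1, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(features, 1),
        )

    def forward(self, x):
        return self.net(x)


def adversarial_step(refiner, discriminator, synthetic, real, opt_r, opt_d,
                     bce=nn.BCEWithLogitsLoss()):
    """One back-and-forth round: train the discriminator, then the refiner."""
    # 1) Discriminator: label real images 1 and refined synthetic images 0.
    refined = refiner(synthetic).detach()
    d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1)) +
              bce(discriminator(refined), torch.zeros(refined.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Refiner: try to "trick" the discriminator into scoring refined images as real.
    refined = refiner(synthetic)
    r_loss = bce(discriminator(refined), torch.ones(refined.size(0), 1))
    opt_r.zero_grad()
    r_loss.backward()
    opt_r.step()
    return d_loss.item(), r_loss.item()
```

In a full training loop, opt_r and opt_d would typically be separate optimizers (for example, torch.optim.Adam over the refiner’s and discriminator’s parameters), and the back-and-forth for a given image stops once the discriminator’s loss shows it can no longer reliably separate real from refined.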


This teaches both the discriminator and the refiner while they compete, thereby gradually enhancing the tools as they build up a strong library of detailed synthetic images.

The learning process is a detailed one, with Apple going to great lengths to preserve the original aspects of the images while avoiding the artifacts that can build up during image processing. It is worth it, though, as further testing has shown vastly improved performance for image classification trained on refined synthetic images, especially when they have been refined multiple times.
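
The “preserve original aspects” part can be thought of as an extra term in the refiner’s objective that penalizes it for straying too far from the synthetic input, so the labels attached to the synthetic image remain valid after refinement. Here is a hedged sketch of such a combined objective; the L1 distance and the lambda_reg weighting are assumptions, loosely analogous to the self-regularization idea Apple describes, not its published loss.

```python
# Sketch of a combined refiner objective: look real to the discriminator while
# staying close to the original synthetic input. The L1 term and lambda_reg
# weighting are assumptions for illustration, not Apple's published loss.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_reg = 0.5  # assumed trade-off between realism and content preservation


def refiner_loss(discriminator, synthetic, refined):
    # Realism: the discriminator should score refined images as "real" (label 1).
    realism = bce(discriminator(refined), torch.ones(refined.size(0), 1))
    # Preservation: the refined output should stay close to the synthetic input,
    # so the annotations generated with the synthetic image remain valid.
    preservation = l1(refined, synthetic)
    return realism + lambda_reg * preservation
```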
