
A.I. could help cameras see in candlelight, research suggests

CVPR 2018: Learning to See in the Dark

Low-light photography is a balancing act between blur and noise, but artificial intelligence may be able to even the score. Researchers from the University of Illinois Urbana-Champaign and Intel have trained a program to produce low-noise images of a room lit by a single candle. By feeding a neural network pairs of RAW shots of the same scene, one a short exposure and one a long exposure, the group produced images with less noise and without the odd color casts that alternative methods introduce. With additional research, the processing algorithms could help cameras take cleaner images without resorting to a longer shutter speed.

To build what the group calls the See-in-the-Dark data set, the researchers took two different images of each scene in limited light. Using a remote app to control the camera without touching it, the group first took a properly exposed long exposure, from 10 to 30 seconds. The researchers then took a second shot with a short exposure of 0.03 to 0.1 seconds, which typically produced an image that was almost entirely black.
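As a rough illustration (not the researchers' code), those exposure times imply a brightness gap of a few hundred times. Simply multiplying the short exposure by that ratio brightens the image, but it amplifies the sensor noise by exactly the same factor, which is why a learned approach is needed. The signal and noise values below are made-up, illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel signal and read noise for the short exposure.
signal = 0.002
noise_sigma = 0.001
ratio = 300  # e.g. a 30 s long exposure vs. a 0.1 s short exposure

short = signal + rng.normal(0, noise_sigma, 10_000)
brightened = short * ratio  # naive linear brightening

# Noise scales by the same factor as the signal, so the
# signal-to-noise ratio does not improve at all.
print(round(brightened.std() / (short.std() * ratio), 2))  # prints 1.0
```

This is the failure mode of naive scaling that the article's "balance between blur and noise" refers to: longer exposures trade noise for blur, while brightening a short exposure trades blur for noise.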

Repeating this process around 5,000 times, some with a Sony a7S II and some with a Fujifilm X-T2, the researchers then used the paired images to train a neural network. Each image was first preprocessed by separating the sensor data into color channels, subtracting the black level, and reducing the resolution. The data set used RAW data straight from the camera sensor, not processed JPEGs.
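The preprocessing described above can be sketched in a few lines of numpy. This is a hedged illustration, not the researchers' code: the `black_level` and `white_level` values are example figures for a 14-bit sensor, and an RGGB Bayer layout is assumed.

```python
import numpy as np

def pack_raw(bayer: np.ndarray, black_level: int = 512,
             white_level: int = 16383) -> np.ndarray:
    """Pack a Bayer-mosaic RAW frame into 4 color planes at half resolution.

    Subtracts the sensor's black level and normalizes to [0, 1],
    mirroring the preprocessing steps the article describes.
    """
    norm = np.clip((bayer.astype(np.float32) - black_level)
                   / (white_level - black_level), 0.0, 1.0)
    # Split the 2x2 Bayer pattern into separate planes, which
    # halves the spatial resolution in each dimension.
    return np.stack([norm[0::2, 0::2],   # R
                     norm[0::2, 1::2],   # G
                     norm[1::2, 0::2],   # G
                     norm[1::2, 1::2]],  # B
                    axis=-1)

# A synthetic 4x4 mosaic stands in for real sensor data.
fake = np.full((4, 4), 1024, dtype=np.uint16)
packed = pack_raw(fake)
print(packed.shape)  # prints (2, 2, 4)
```

Working on packed RAW planes rather than JPEGs matters because the network sees the sensor's linear response before any demosaicing, white balance, or tone curve has been baked in.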

When applied to RAW sensor data, the algorithms trained on the data set produced brighter images with less noise than traditional ways of handling camera data, such as standard demosaicing pipelines. The resulting images also had a more accurate white balance than current methods. The results improve on traditional image processing, the researchers said, and warrant more research.

The enhanced processing method could help smartphones perform better in low light, along with improving handheld shots from DSLRs and mirrorless cameras, the group suggests. Video could also benefit, since taking a longer exposure isn't possible while maintaining a standard frame rate.

While the sample images from the program are impressive, the processing was tested only on stationary subjects. It was also slower than current standards: the images took between 0.38 and 0.66 seconds each to process at a reduced resolution, too slow to keep up with the burst speeds of current cameras. The data set was also tailored to a specific camera sensor; without additional research on data sets that generalize across sensors, the process would have to be repeated for each new camera sensor. The researchers suggested that future research could look into those limitations.

Hillary K. Grigonis