Deep neural networks can now transfer the style of one photo onto another


CrAKeN

deep_learning_photos_example.0.jpg

 

You’ve probably heard of an AI technique known as “style transfer” — or, if you haven’t heard of it, you’ve seen it. The process uses neural networks to apply the look and feel of one image to another, and appears in apps like Prisma and Facebook. These style transfers, however, are stylistic, not photorealistic. They look good because they look like they’ve been painted. Now a group of researchers from Cornell University and Adobe have augmented style transfer so that it can transfer the look of one photo onto another — while still looking like a photo. The results are impressive.

 

The researchers’ work is outlined in a paper called “Deep Photo Style Transfer.” Essentially, they’ve taken the methods of the original style transfer, and added another layer of neural networks to the process — a layer that makes sure that the details of the original image are preserved.

 

style_photo_transfer_example_2.jpg

 

From left to right: the original image, the reference image, and the output.

 

“People are very forgiving when they see [style transfer images] in these painterly styles,” Cornell professor Kavita Bala, a co-author of the study, tells The Verge. “But with real photos there’s a stronger expectation of what we want it to look like, and that’s why it becomes an interesting challenge.”

 

The added neural network layer pays close attention to what Bala calls “local affine patches.” There’s no quick way to accurately translate this phrase, but it basically means the various edges within the image, whether that’s the border between a tree and a lake, or a building and the sky. While style transfer tends to play fast and loose with these edges, shifting them back and forth as it pleases, photo style transfer preserves them.
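To make the "local affine" idea concrete: the constraint says that within any small patch, the output colors should be an affine (matrix-times-color-plus-offset) function of the input colors. Because every pixel in the patch is recolored by the same map, an edge between two regions stays exactly where it was. The sketch below is a toy illustration of that property, not the paper's actual implementation (which enforces it via a Matting Laplacian regularization term during optimization); the matrix `A` and offset `b` here are hypothetical stand-ins for a learned style recoloring.

```python
import numpy as np

def apply_local_affine(patch, A, b):
    """Apply one affine color transform (3x3 matrix A, offset b) to every
    pixel in a small patch. Edges inside the patch are preserved because
    all pixels are mapped by the same linear function of their color."""
    h, w, _ = patch.shape
    flat = patch.reshape(-1, 3)       # (h*w, 3) pixel colors
    out = flat @ A.T + b              # affine recoloring of each pixel
    return out.reshape(h, w, 3)

# Toy patch with a hard edge: left half red, right half blue.
patch = np.zeros((4, 4, 3))
patch[:, :2] = [1.0, 0.0, 0.0]
patch[:, 2:] = [0.0, 0.0, 1.0]

# Hypothetical style transform (darken red, mix green into blue).
A = np.array([[0.5, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5]])
b = np.array([0.1, 0.1, 0.1])

styled = apply_local_affine(patch, A, b)

# The edge survives: the two halves still have distinct colors,
# and each half is still uniform — the boundary has not moved.
assert not np.allclose(styled[0, 0], styled[0, 3])
assert np.allclose(styled[:, :2], styled[0, 0])
```

Ordinary neural style transfer has no such constraint, which is why it can warp or smear boundaries; restricting each patch to an affine recoloring is what keeps the output photorealistic.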

 

There are limits to the technique, of course. The algorithms seem to work best with structures like buildings — the flaws are more obvious with faces. And you can't use massively different photos for transferring style, otherwise the neural networks have a tougher time analyzing elements to copy from picture to picture. "If you have a picture of a lake and you have a scene where you're taking the style from, ideally it would also have a water body in it of some sort," says Bala. "There's no defined limit, but this is a good open research question. We put the code out because we want people to play with it and try it out." (The code is available on GitHub.)

 

Screen_Shot_2017_03_30_at_6.07.34_PM.png

 

In the image marked Neural Style, you can see how ordinary style transfer mutates the sharp edges of the photo.

 

The question is, how long will it be until these sorts of photo style transfers are made accessible to the public? After all, the original style transfer went from a first research paper to Facebook's app, reaching hundreds of millions of users, in less than two years. And given Adobe's involvement in the paper, the company is presumably at least somewhat interested in commercializing the technique. We've reached out to the company to find out more, and will update if and when we hear back.

 

For now, though, the researchers are already thinking about what areas photorealistic style transfer could be applied to next. “The question of how far you can push it is important,” says Bala. “Video is a logical thing for it to go to, and that, I expect, will happen.”

 

Source
