Facial recognition breakthrough: 'Deep Dense' software spots faces in images even if they're partially hidden or UPSIDE DOWN
Picking faces out of a crowd is something humans are hardwired to do, but training computers to act in the same way is much more difficult.
There have been various breakthroughs in this field in recent months, but the latest could be the most significant yet.
Researchers from Yahoo Labs and Stanford University have developed an algorithm that can identify faces from various angles, when part of the face is hidden and even when it is upside down.
The Deep Dense Face Detector algorithm was built by Yahoo Labs in California and Stanford University. The researchers used a form of machine learning known as a deep convolutional neural network to train a computer to spot facial features (pictured) in a database of images
At the moment, the so-called Deep Dense Face Detector doesn't recognise who a face belongs to, only that a face is present.
But the technology could, in principle, be trained to do so.
The algorithm was built by Sachin Farfade and Mohammad Saberian at Yahoo Labs in California and Li-Jia Li at Stanford University.
It builds on the Viola-Jones algorithm, which spots front-facing faces in images by picking out key features such as the vertical line of the nose and the shadows around the eyes.
By tallying up these markers, the algorithm can decide whether an image contains a face.
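As an illustration of that classical approach only (not the Yahoo and Stanford system), a Viola-Jones-style detector can be run in a few lines with the open-source OpenCV library; the image filename below is a placeholder.

```python
# Illustrative sketch of classical Viola-Jones-style face detection with OpenCV.
# The input filename is a placeholder; the Haar cascade file ships with OpenCV.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("crowd.jpg")                    # placeholder photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # the cascade works on greyscale

# Scan the image at several scales and return boxes around upright, frontal faces
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("crowd_faces.jpg", image)
```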
But this approach struggles with faces that are obscured, turned away from the camera or upside down.
With this in mind, Mr Farfade and his team used a form of machine learning known as a deep convolutional neural network.
This involves training a computer to recognise elements of images by passing examples from a database through a stack of processing layers, each of which learns to pick out progressively more complex visual features.
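As a rough sketch only (not the researchers' published model), a small convolutional network that decides whether an image patch contains a face could be stacked from layers like these, written here in PyTorch:

```python
# Minimal sketch of a convolutional network that classifies a 32x32 image patch
# as "face" or "no face". The layer sizes are illustrative, not the published model.
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 4 * 4, 2)   # two outputs: face / no face

    def forward(self, x):
        x = self.features(x)        # stacked layers pick out visual features
        x = torch.flatten(x, 1)
        return self.classifier(x)   # scores for "face" and "no face"

model = TinyFaceNet()
scores = model(torch.randn(1, 3, 32, 32))   # score one random 32x32 RGB patch
```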
Google used a similar technique for its recent GoogLeNet classification algorithm that can identify images within images, such as a hat on the head of a dog sitting on a bench.
Mr Farfade trained his algorithm using a database of 200,000 images featuring faces shown at various angles and orientations, plus 20 million images that didn't contain faces.
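In broad strokes, training such a classifier means showing it labelled face and non-face patches and repeatedly nudging its weights to reduce its mistakes. The schematic loop below uses a deliberately tiny stand-in network and random placeholder tensors in place of those millions of real images:

```python
# Schematic training loop for a face / no-face classifier. Random tensors stand in
# for the labelled face and background patches a real system would load from disk.
import torch
import torch.nn as nn

model = nn.Sequential(                        # deliberately tiny stand-in network
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                         # two outputs: face / no face
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    patches = torch.randn(32, 3, 32, 32)      # placeholder batch of 32x32 RGB patches
    labels = torch.randint(0, 2, (32,))       # placeholder face / no-face labels
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)    # penalise wrong face / no-face calls
    loss.backward()                           # backpropagate the error
    optimizer.step()                          # nudge the weights to reduce it
```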
In their paper, the researchers said: 'In this paper we propose a method based on deep learning, called Deep Dense Face Detector.'
'It has minimal complexity...and can get similar or better performance [than other systems] while it does not require annotation or information about facial landmarks.'
And, the team said the technology could be improved following further training.
The algorithm (recognition pictured left) can identify faces from various angles, when part of the face is hidden and even upside down (example pictured right). At the moment, the technology doesn't recognise who a face belongs to, but could be trained to do so
Google used a similar neural network technique for its recent GoogLeNet classification algorithm that can identify images within images, such as a hat on the head of a dog (pictured)
Facebook's Deep Face tool also used this neural network technique to help recognise users in photos.
Its algorithm identifies faces 'as accurately as a human' and offers tag suggestions which the user can accept or reject.
The technology was first showcased last March, but the site has now started rolling out the automatic tagging tool to select users.
DeepFace uses technology designed by an Israeli startup called face.com.
Google's technology is accurate enough to locate and distinguish between objects of a range of sizes within a single image, and it can also pick out an object sitting within, or on top of, another object in the photo (pictured)
Facebook's Deep Face tool also used this neural network technique to help recognise users in photos. Its algorithm identifies faces 'as accurately as a human' and uses a 3D model to virtually rotate faces so they are facing the camera. Image (a) shows the original image and (g) shows the final, corrected version
Facebook bought the startup in 2013 and developed the facial recognition tool with support from face.com's Yaniv Taigman at its Artificial Intelligence lab.
The researchers used the software to build a 3D model of a face from a photo that can be rotated into the best position for an algorithm to begin searching for a match.
After creating a model, the team used a neural network that had been trained on a database of faces to try to match the face with one in a test database of more than 4 million images, covering more than 4,000 separate identities, each labelled by humans.
Its creators said DeepFace finds a match with 97.25 per cent accuracy.
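In heavily simplified form (and not Facebook's actual code), the matching step described above can be pictured as comparing a numerical 'fingerprint' of the new face against the fingerprints of already-labelled faces; the gallery and embeddings below are toy placeholders:

```python
# Heavily simplified sketch of embedding-based face matching. The embeddings here
# are random placeholders; a real system would produce them with a trained network.
import numpy as np

def cosine_similarity(a, b):
    """How closely two face 'fingerprints' (embedding vectors) point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, gallery):
    """Return the labelled identity whose embedding best matches the query."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy gallery of two labelled identities with random 128-dimensional embeddings
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
query = gallery["alice"] + 0.05 * rng.normal(size=128)   # a slightly different photo of 'alice'
print(identify(query, gallery))
```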
The tagging option has now started appearing in the privacy settings of accounts globally - although in many cases it still says the feature is 'unavailable' - and a number of users have reportedly been given the tool.
Facebook uses Deep Face to offer tag suggestions which the user can accept, or reject. The tagging option (pictured bottom) has started appearing in the settings of accounts globally - although in many cases it says the feature is 'unavailable' - and a number of users have reportedly been given the tool
Security researcher Lee Munson said: 'The social network plans to use the system to identify its users in new photos as they are uploaded. If your visage appears in one of the 400 million pictures added to the network each day you'll receive an email from Facebook alerting you'