
Search the Community

Showing results for tags 'artificial intelligence'.




Found 47 results

  1. ThisPersonDoesNotExist.com uses AI to generate endless fake faces Hit refresh to lock eyes with another imaginary stranger A few sample faces — all completely fake — created by ThisPersonDoesNotExist.com The ability of AI to generate fake visuals is not yet mainstream knowledge, but a new website — ThisPersonDoesNotExist.com — offers a quick and persuasive education. The site is the creation of Philip Wang, a software engineer at Uber, and uses research released last year by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a generative adversarial network (or GAN) to fabricate new examples. “Each time you refresh the site, the network will generate a new facial image from scratch,” wrote Wang in a Facebook post. He added in a statement to Motherboard: “Most people do not understand how good AIs will be at synthesizing images in the future.” The underlying AI framework powering the site was originally invented by a researcher named Ian Goodfellow. Nvidia’s take on the algorithm, named StyleGAN, was made open source recently and has proven to be incredibly flexible. Although this version of the model is trained to generate human faces, it can, in theory, mimic any source. Researchers are already experimenting with other targets, including anime characters, fonts, and graffiti. As we’ve discussed before at The Verge, the power of algorithms like StyleGAN raises a lot of questions. On the one hand, there are obvious creative applications for this technology. Programs like this could create endless virtual worlds, as well as help designers and illustrators. They’re already leading to new types of artwork. Then there are the downsides. As we’ve seen in discussions about deepfakes (which use GANs to paste people’s faces onto target videos, often in order to create non-consensual pornography), the ability to manipulate and generate realistic imagery at scale is going to have a huge effect on how modern societies think about evidence and trust. Such software could also be extremely useful for creating political propaganda and influence campaigns. In other words, ThisPersonDoesNotExist.com is just the polite introduction to this new technology. The rude awakening comes later. Source
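The GAN idea described above, where a generator learns to fabricate images while a second network learns to spot the fakes, can be sketched in a few dozen lines. The following is a minimal, illustrative PyTorch sketch of that adversarial loop; it is nothing like the scale of Nvidia's StyleGAN, and the layer sizes and training details are placeholder assumptions.

```python
# Minimal GAN sketch (illustrative only; real systems such as StyleGAN are far larger).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # toy sizes, not StyleGAN's

generator = nn.Sequential(          # maps random noise to a fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores images as real (1) or fake (0)
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    """One adversarial round: the discriminator learns to tell real from fake,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # Discriminator update: real images should score 1, generated ones 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make fakes that the discriminator scores as real.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with dummy data in place of a real face dataset:
print(train_step(torch.rand(32, IMG_DIM)))
```

After enough rounds of this back-and-forth, sampling the generator with fresh noise yields new images that resemble the training set; that is the mechanism behind "a new facial image from scratch" on every refresh.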
  2. Google and Microsoft warn investors that bad AI could harm their brand As AI becomes more common, companies’ exposure to algorithmic blowback increases Illustration by Alex Castro / The Verge For companies like Google and Microsoft, artificial intelligence is a huge part of their future, offering ways to enhance existing products and create whole new revenue streams. But, as revealed by recent financial filings, both firms also acknowledge that AI — particularly biased AI that makes bad decisions — could potentially harm their brands and businesses. These disclosures, spotted by Wired, were made in the companies’ 10-K forms. These are standardized documents that firms are legally required to file every year, giving investors a broad overview of their business and recent finances. In the segment titled “risk factors,” both Microsoft and Alphabet, Google’s parent company, brought up AI for the first time. From Alphabet’s 10-K, filed last week: These disclosures are not, on the whole, hugely surprising. The idea of the “risk factors” segment is to keep investors informed, but also to mitigate future lawsuits that might accuse management of hiding potential problems. Because of this, they tend to be extremely broad in their remit, covering even the most obvious ways a business could go wrong. This might include problems like “someone made a better product than us and now we don’t have any customers,” and “we spent all our money so now don’t have any”. But, as Wired’s Tom Simonite points out, it is a little odd that these companies are only noting AI as a potential factor now. After all, both have been developing AI products for years, from Google’s self-driving car initiative, which began in 2009, to Microsoft’s long dalliance with conversational platforms like Cortana. This technology provides ample opportunities for brand damage, and, in some cases, already has. Remember when Microsoft’s Tay chatbot went live on Twitter and started spouting racist nonsense in less than a day? Years later, it’s still regularly cited as an example of AI gone wrong. However, you could also argue that public awareness of artificial intelligence and its potential adverse effects has grown hugely over the past year. Scandals like Google’s secret work with the Pentagon under Project Maven, Amazon’s biased facial recognition software, and Facebook’s algorithmic incompetence with the Cambridge Analytica scandal have all brought the problems of badly implemented AI into the spotlight. (Interestingly, despite similar exposure, neither Amazon nor Facebook mentions AI risk in its latest 10-K.) And Microsoft and Google are doing more than many companies to keep abreast of this danger. Microsoft, for example, is arguing that facial recognition software needs to be regulated to guard against potential harms, while Google has started the slow business of engaging with policy makers and academics about AI governance. Giving investors a heads-up too only seems fair. Source
  3. Habana, the AI chip innovator, promises top performance and efficiency Habana is the best kept secret in AI chips. Designed from the ground up for machine learning workloads, it promises superior performance combined with power efficiency to revolutionize everything from data centers in the cloud to autonomous cars. As data generation and accumulation accelerates, we've reached a tipping point where using machine learning just works. Using machine learning to train models that find patterns in data and make predictions based on those is applied to pretty much everything today. But data and models are just one part of the story. Also: The AI chip unicorn that's about to revolutionize everything Another part, equally important, is compute. Machine learning consists of two phases: Training and inference. In the training phase, patterns are extracted, and machine learning models that capture them are created. In the inference phase, trained models are deployed and fed with new data in order to generate results. Both of these phases require compute power. Not just any compute, in fact: as it turns out, CPUs are not really geared towards the specialized type of computation required for machine learning workloads. GPUs are currently the weapon of choice when it comes to machine learning workloads, but that may be about to change. AI CHIPS JUST GOT MORE INTERESTING GPU vendor Nvidia has reinvented itself as an AI chip company, coming up with new processors geared specifically towards machine learning workloads and dominating this market. But the boom in machine learning workloads has whetted the appetite of other players, as well. Also: AI chips for big data and machine learning Cloud vendors such as Google and AWS are working on their own AI chips. Intel is working on getting FPGA chips in shape to support machine learning. And upstarts are having a go at entering this market as well. GraphCore is the most high-profile among them, with recent funding having catapulted it into unicorn territory, but it's not the only one: Enter Habana. Habana has been working on its own processor for AI since 2015. But as Eitan Medina, its CBO, told us in a recent discussion, it has been doing so in stealth until recently: "Our motto is AI performance, not stories. We have been working under cover until September 2018". David Dahan, Habana CEO, said that "among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor." As Medina explained, Habana was founded by CEO David Dahan and VP R&D Ran Halutz. Both Dahan and Halutz are semiconductor industry veterans, and they have worked together for years in semiconductor companies CEVA and PrimeSense. The management team also includes CTO Shlomo Raikin, former Intel project architect. Medina himself also has an engineering background: "Our team has a deep background in machine learning. If you Google topics such as quantization, you will find our names," Medina said. And there's no lack of funding or staff either. Habana just closed a Round B financing of $75 million, led by Intel Capital no less, which brings its total funding to $120 million. Habana has a headcount of 120 and is based in Tel Aviv, Israel, but also has offices and R&D in San Jose, US, Gdansk, Poland, and Beijing, China. This looks solid. All these people, funds, and know-how have been set in motion by identifying the opportunity. 
Much like GraphCore, Habana's Medina thinks that the AI chip race is far from over, and that GPUs may be dominating for the time being, but that's about to change. Habana brings two key innovations to the table: Specialized processors for training and inference, and power efficiency. SEPARATING TRAINING AND INFERENCE TO DELIVER SUPERIOR PERFORMANCE Medina noted that starting with a clean sheet to design their processor, one of the key decisions made early on was to address training and inference separately. As these workloads have different needs, Medina said that treating them separately has enabled them to optimize performance for each setting: "For years, GPU vendors have offered new versions of GPUs. Now Nvidia seems to have realized they need to differentiate. We got this from the start." Also: AI Startup Gyrfalcon spins plethora of chips for machine learning Habana offers two different processors: Goya, addressing inference; and Gaudi, addressing training. Medina said that Goya is used in production today, while Gaudi will be released in Q2 2019. We wondered why inference was addressed first. Was it because the architecture and requirements for inference are simpler? Medina said that it was a strategic decision based on market signals. Medina noted that the lion's share of inference workloads in the cloud still runs on CPUs. Therefore, he explained, Habana's primary goal at this stage is to address these workloads as a drop-in replacement. Indeed, according to Medina, Habana's clients at this point are to a large extent data center owners and cloud providers, as well as autonomous car ventures. The value proposition in both cases is primarily performance. According to benchmarks published by Habana, Goya is significantly faster than both Intel's CPUs and Nvidia's GPUs. Habana used the well-known ResNet-50 benchmark, and Medina explained the rationale was that ResNet-50 is the easiest to measure and compare, as it has fewer variables. Medina said other architectures must make compromises: "Even when asked to give up latency, throughput is below where we are. With GPUs / CPUs, if you want better performance, you need to group data input in big groups of batches to feed the processor. Then you need to wait till entire group is finished to get the results. These architectures need this, otherwise throughput will not be good. But big batches are not usable. We have super high efficiency even with small batch sizes." There are some notable points about these benchmarks. The first, Medina pointed out, is that their scale is logarithmic, which is needed to be able to accommodate Goya and the competition in the same charts. Hence the claim that "Habana smokes inference incumbents." The second is that results become even more interesting if power efficiency is factored in. POWER EFFICIENCY AND THE SOFTWARE STACK Power efficiency is a metric used to measure how much power is needed per calculation in benchmarks. This is a very important parameter. It's not enough to deliver superior performance alone; the cost of delivering it is just as important. A standard metric to measure processor performance is IPS, Instructions Per Second. But IPS/W, or IPS per Watt, is probably a better one, as it takes into account the cost of delivering performance. Also: Alibaba to launch own AI chip next year Higher power efficiency is better in every possible way. Thinking about data centers and autonomous vehicles, minimizing the cost of electricity, and increasing autonomy are key requirements. 
And in the bigger picture, lowering carbon footprint is a major concern for the planet. As Medina put it, "You should care about the environment, and you should care about your pocket." Goya's value proposition for data centers is focused on this, also factoring in latency requirements. As Medina said, for a scenario of processing 45K images/second, three Goya cards can get results with a latency of 1.3 msec, replacing 169 CPU servers with a latency of 70 msec plus 16 Nvidia Tesla V100s with a latency of 2.5 msec, at a total cost of around $400,000. The message is clear: You can do more with less. (A rough sketch of this comparison appears at the end of this item.) TPC, Habana's Tensor Processor Core at the heart of Goya, supports different form factors, memory configurations, and PCIe cards, as well as mixed-precision numerics. It is also programmable in C, and accessible via what Habana calls the GEMM engine (General Matrix Multiplication). This touches upon another key aspect of AI chips: The software stack, and integrations with existing machine learning frameworks. As there is a slew of machine learning frameworks people use to build their models, supporting as many of them as seamlessly as possible is a key requirement. Goya supports models trained on any processor via an API called SynapseAI. At this point, SynapseAI supports TensorFlow, MXNet and ONNX, an emerging exchange format for deep learning models, and is working on adding support for PyTorch, and more. Users should be able to deploy their models on Goya without having to fiddle with SynapseAI. For those who wish to tweak their models to include customizations, however, the option to do so is there, as well as IDE tools to support them. Medina said this low-level programming has been requested by clients who have developed custom ways of maximizing performance on their current setting and would like to replicate this on Goya. THE BIGGER PICTURE So, who are these clients, and how does one actually become a client? Medina said Habana has a sort of screening process for clients, as they are not yet at the point where they can ship massive quantities of Goya. Habana is sampling Goya to selected companies only at this time. That's what's written on the form you'll have to fill in if you're interested. Also: AI Startup Cornami reveals details of neural net chip Not that Goya is a half-baked product, as it is used in production according to Medina. Specific names were not discussed, but yes, these include cloud vendors, so you can let your imagination run wild. Medina also emphasized that R&D on the hardware level for Goya is mostly done. However, there is ongoing work to take things to the next level with 7 nanometer chips, plus work on the Gaudi processor for training, which promises linear scalability. In addition, development of the software stack never ceases in order to optimize, add new features and support for more frameworks. Recently, Habana also published open source Linux drivers for Goya, which should help a lot considering Linux is what powers most data centers and embedded systems. Habana, just like GraphCore, seems to have the potential to bring about a major disruption in the AI chip market and the world at large. Many of its premises are similar: A new architecture, experienced team, well funded, and looking to seize the opportunity. One obvious difference is in how they approach their public image, as GraphCore has been quite open about their work, while Habana was a relative unknown up to now. 
As for the obvious questions -- which one is faster or better, which one will succeed, can they dethrone Nvidia -- we simply don't know. GraphCore has not published any benchmarks. Judging from an organizational maturity point of view, Habana seems to be lagging at this point, but that does not necessarily mean much. One thing we can say is that this space is booming, and we can expect AI chip innovation to catalyze AI even further soon. The takeaway from this, however, should be to make power efficiency a key aspect of the AI narrative going forward. Performance comes at a price, and this should be factored in. Source
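The performance-per-watt argument above lends itself to a quick back-of-the-envelope check. The sketch below uses the throughput and latency figures quoted by Medina (45K images/second; 3 Goya cards versus 169 CPU servers plus 16 V100 GPUs), but the per-device wattages are placeholder assumptions, since the article gives none; it only illustrates how an IPS/W-style metric is computed.

```python
# Back-of-the-envelope comparison in the spirit of the article's IPS/W argument.
# Throughput and latency figures come from the quoted Goya example; the per-device
# power numbers below are hypothetical placeholders, not Habana or Nvidia data.
def perf_per_watt(images_per_second: float, total_watts: float) -> float:
    """Throughput per watt, the 'IPS/W'-style metric the article describes."""
    return images_per_second / total_watts

WORKLOAD_IPS = 45_000  # images/second, scenario quoted by Medina

setups = {
    # name: (device count, assumed watts per device, latency in ms from the article)
    "3x Goya cards":       (3,   200, 1.3),   # wattage is an assumption
    "169 CPU servers":     (169, 400, 70.0),  # wattage is an assumption
    "16x Tesla V100 GPUs": (16,  300, 2.5),   # wattage is an assumption
}

for name, (count, watts_each, latency_ms) in setups.items():
    total_watts = count * watts_each
    print(f"{name:>20}: {perf_per_watt(WORKLOAD_IPS, total_watts):7.2f} img/s/W, "
          f"latency {latency_ms} ms")
```

Whatever the exact wattages turn out to be, the point of the metric stands: comparing raw throughput without dividing by power hides most of the operating cost.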
  4. Can AI help crack the code of fusion power? ‘It’s sort of this beautiful synergy between the human and the machine’ Part of The Real-World AI Issue With the click of a mouse and a loud bang, I blasted jets of super-hot, ionized gas called plasma into one another at hundreds of miles per second. I was sitting in the control room of a fusion energy startup called TAE Technologies, and I’d just fired its $150 million plasma collider. That shot was a tiny part of the company’s long pursuit of a notoriously elusive power source. I was at the company’s headquarters to talk to them about the latest phase of their hunt, which involves an algorithm called the Optometrist. Nuclear fusion is the reaction that’s behind the Sun’s energetic glow. Here on Earth, the quixotic, expensive quest for controlled fusion reactions gets a lot of hype and a lot of hate. (To be clear, this isn’t the same process that happens in a hydrogen bomb. That’s an uncontrolled fusion reaction.) The dream is that fusion power would mean plenty of energy with no carbon emissions or risk of a nuclear meltdown. But scientists have been pursuing fusion power for decades, and they are nowhere near adding it to the grid. Last year, a panel of advisers to the US Department of Energy published a list of game-changers that could “dramatically increase the rate of progress towards a fusion power plant.” The list included advanced algorithms, like artificial intelligence and machine learning. It’s a strategy that TAE Technologies is banking on: the 20-year-old startup began collaborating with Google a few years ago to develop machine learning tools that it hopes will finally bring fusion within reach. Norman, TAE’s $150 million plasma collider. Photo by Brennan King and Weston Reel Attempts at fusion involve smacking lightweight particles into one another at temperatures high enough that they fuse together, producing a new element and releasing energy. Some experiments control a super-hot ionized gas called plasma with magnetic fields inside a massive metal doughnut called a tokamak. Lawrence Livermore National Laboratory fires the world’s largest laser at a tiny gold container with an even tinier pellet of nuclear fuel inside. TAE twirls plasma inside a linear machine named Norman, tweaking thousands of variables with each shot. It’s impossible for a person to keep all of those variables in their head or to change them one at a time. That’s why TAE is collaborating with Google, using a system called the Optometrist algorithm that helps the team home in on the ideal conditions for fusion. We weren’t sure what to make of all the hype surrounding AI or machine learning or even fusion energy for that matter. So the Verge Science video team headed to TAE’s headquarters in Foothill Ranch, California, to see how far along it is, and where — if anywhere — AI entered the picture. You can watch what we found in the video accompanying the original article. Ultimately, we found a lot of challenges but a lot of persistent optimism, too. “The end goal is to have power plants that are burning clean fuels that are abundant, [and] last for as long as humanity could last,” says Erik Trask, a lead scientist at TAE. “Now, we think that we have found a way to do it, but we have to show it. That’s the hard part.” Source
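For a sense of what the Optometrist approach looks like in code, here is a heavily simplified, hypothetical sketch of the published idea: the machine proposes a perturbed set of machine settings for the next plasma shot, and a human expert says which of two shots looked better, playing the role of the eye-exam patient. The parameter names, ranges, and preference function are all invented; this is not TAE's or Google's implementation.

```python
# Sketch of an Optometrist-style human-in-the-loop search over machine settings.
# Simplified from the published idea (machine proposes, human picks the better of
# two shots); the knob names, ranges, and scoring here are made up for illustration.
import random

settings = {"gas_puff": 1.0, "magnet_current": 5.0, "beam_power": 2.0}  # hypothetical knobs

def propose(current: dict, step: float = 0.1) -> dict:
    """Randomly perturb the settings to get a candidate for the next shot."""
    return {k: v * (1 + random.uniform(-step, step)) for k, v in current.items()}

def human_prefers(shot_a: dict, shot_b: dict) -> bool:
    """Stand-in for the operator judging which shot's plasma looked better.
    In the real experiment a physicist makes this call after each pair of shots."""
    score = lambda s: -abs(s["gas_puff"] - 1.2) - abs(s["beam_power"] - 2.5)  # fake objective
    return score(shot_a) >= score(shot_b)

for shot in range(50):                       # each iteration stands in for one pair of shots
    candidate = propose(settings)
    if not human_prefers(settings, candidate):
        settings = candidate                 # keep whichever the human preferred
print("settings after 50 shots:", settings)
```

The appeal of the design is exactly the "synergy" the article's headline quotes: the machine explores thousands of knob combinations no person could track, while the human supplies the judgment call that is hard to encode as a single objective.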
  5. Could This Technology Make Amazon Go Stores Obsolete? A start-up company is promising to offer similar artificial intelligence tools that are scalable for larger retailers. Amazon (AMZN) is revolutionizing how we shop with its cashierless Amazon Go convenience stores, and hopes to expand the concept from fewer than 10 now into a network of 3,000 stores in only two years. There may be logistical issues in building out so many stores in such a short time, as well as political ones if states decide all stores must accept cash. But the biggest challenge may be competing technology that allows more retailers to offer their own no-cashier, no-checkout shopping. It could make the Amazon Go chain obsolete. IMAGE SOURCE: AMAZON.COM. The dawn of smart carts Amazon Go's "just walk out" shopping experience requires a store to be outfitted with machine vision, deep-learning algorithms, and an array of cameras and sensors to watch a customer's every move. It knows what every item is, and when it's been picked up and put back, so it can charge a shopper's account. A start-up called Caper is offering similar technology that is more accessible to more retailers. Rather than outfit an entire store with such advanced artificial intelligence (AI), Caper puts it into individual shopping carts. It lets supermarkets more easily compete against Amazon without the massive cost necessary to build new stores or retrofit a chain's existing infrastructure. Shoppers put items into AI-powered carts, which identify the products and ring up the total. Interactive screens on the carts not only keep a running tally of the order, but can also direct shoppers to in-store specials. The technology used by Caper's partners does currently require customers to scan each item into the shopping cart screens, but they're using it to train the deep learning algorithm to enable shopping without scanning. When they are finished shopping, customers can pay via the screen and leave. While bagging up a week's worth of groceries could slow you down, this system seems likely to encourage shoppers to bring their own bags into the store and fill them up as they go. That's often how the scan-and-go technologies work, so you're not stuck at the register still having to bag all your items. Amazon Go's limitations The original Amazon Go location was only 1,800 square feet, about the size of a convenience store, but it was estimated to cost at least $1 million to install all the cameras and sensors. Analysts estimate that to build out 3,000 stores would cost some $3 billion. While Amazon is testing a range of store sizes, some as large as 2,400 square feet, it's clear the concept is prohibitively expensive to apply to something on the order of a full-sized supermarket, which is why these stores are mostly stocked with convenience items that you can literally grab and go. A Walmart store can run anywhere from 30,000 square feet to over 200,000 for one of its supercenters. Even Amazon's Whole Foods Markets average 40,000 square feet. Retrofitting one of these locations would also seem to be nearly impossible since all-new shelving and displays would be needed to incorporate the cameras and sensors. Many grocery chains like Kroger have added so-called scan-and-go capabilities, where customers use a scanning gun or their smartphones to scan in each item they buy, and then upload the data to a register at checkout. 
But it's a clunkier system than Amazon's effortless grab-and-go technology; Walmart killed off its own scan-and-go system because it said shoppers weren't using it. Scaling up artificial intelligence Caper says its cart technology is currently in place at two grocery store chains, with plans to roll it out to 150 more this year. The company's website lists six retail partners from the New York area including C-Town, Key Food, and Pioneer supermarkets. It says its shopping carts have increased the value of the average basket by 18% because customers are exposed to products they might otherwise miss or can't find. Amazon Go is certainly poised to upend the convenience store industry with its advanced AI wizardry. But Caper could revolutionize the much broader and larger supermarket sector and make Amazon's stores obsolete by bringing the same sort of technology to your neighborhood grocery store in a package that's far more scalable. Source
  6. In the coming months, seeing won't be believing thanks to increasingly convincing computer-generated deepfake videos David Doran Breaking news: a video “leaks” of a famous terrorist leader meeting in secret with an emissary from a country in the Middle East. News organisations air the video as an “unconfirmed report”. American officials can neither confirm nor deny the authenticity of the video – a typically circumspect answer on intelligence matters. The US president condemns the country that would dare hold secret meetings with this reviled terrorist mastermind. Congress discusses imposing sanctions. A diplomatic crisis ensues. Perhaps, seeing an opportunity to harness public outrage, the president orders a cruise missile strike on the last known location of the terrorist leader. All of this because of a few seconds of film – a masterful fabrication. In 2019, we will for the first time experience the geopolitical ramifications of a new technology: the ability to use machine learning to falsify video, imagery and audio that convincingly replicates real public figures. “Deepfakes” is the term becoming shorthand for a broad range of manipulated video and imagery, including face swaps (identity swapping), audio deepfakes (voice swapping), deepfake puppetry (mapping a target’s face to an actor’s for facial reenactment), and deepfake lip-synching (synthetic video created to match an audio file and footage of their face). This term was coined in December 2017 by a Reddit user of the same name, who used open-source artificial intelligence tools to paste celebrities’ faces on to pornographic video clips. A burgeoning community of online deepfake creators followed suit. Deepfakes will continue to improve in ease and sophistication as developers create better AI and new techniques that make it easier to create falsified videos. The telltale signs of a faked video – subjects not blinking, flickering of the facial outline, over-centralised facial features – will become less obvious and, eventually, imperceptible. Ultimately, maybe in a matter of a few years, it will be possible to synthetically generate footage of people without relying on any existing footage. (Current deepfakes need stock footage to provide the background for swapped faces.) Perpetrators of co-ordinated online disinformation operations will gladly incorporate new, AI-powered digital impersonations to advance political goals, such as bolstering support for a military campaign or to sway an electorate. Such videos may also be used simply to undermine public trust in media. Aside from geopolitical meddling or disinformation campaigns, it’s easy to see how this technology could have criminal, commercial applications, such as manipulating stock prices. Imagine a rogue state creating a deepfake that depicts a CEO and CFO furtively discussing missing expectations in the following week’s quarterly earnings call. Before releasing the video to a few journalists, they would short the stock – betting on the stock price plummeting when the market overreacts to this “news”. By the time the video is debunked and the stock market corrects, the perpetrator has already made away with a healthy profit. Perhaps the most chilling realisation about the rise of deepfakes is that they don’t need to be perfect to be effective. They need to be just good enough that the target audience is duped for just long enough. That’s why human-led debunking and the time it requires will not be enough. 
To protect people from the initial deception, we will need to develop algorithmic detection capabilities that can work in real time, and we need to conduct psychological and sociological research to understand how online platforms can best inform people that what they’re watching is fabricated. In 2019, the world will confront a new generation of falsified video deployed to deceive entire populations. And we may not realise the video is fake until we’ve already reacted – maybe overreacted. Indeed, it may take such an overreaction for us all to consider how we relate to fast-moving information online, not only from a technological and platform point of view, but from the perspective of everyday citizens and the mistaken assumption that “seeing is believing”. Source
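The closing call for "algorithmic detection capabilities that can work in real time" is an active research area. Below is a deliberately tiny, hypothetical sketch of the simplest form such detection takes: a per-frame binary classifier whose scores are averaged over a clip. It assumes you already have labeled real and fake frames to train on; production detectors are far more elaborate.

```python
# Toy frame-level deepfake detector: a small CNN that scores each video frame as
# real or fake, then averages the scores over a clip. Real-time production
# detectors are far more sophisticated; this only illustrates the idea, and the
# untrained weights below obviously produce arbitrary scores.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)    # single logit: fake vs. real

    def forward(self, frames):                # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.classifier(x)

def video_is_suspect(model: FrameDetector, frames: torch.Tensor, threshold: float = 0.5) -> bool:
    """Flag a clip if the average per-frame fake probability crosses a threshold."""
    with torch.no_grad():
        mean_prob = torch.sigmoid(model(frames)).mean().item()
    return mean_prob > threshold

model = FrameDetector()
clip = torch.rand(16, 3, 224, 224)            # 16 dummy frames standing in for a real video
print("suspect:", video_is_suspect(model, clip))
```

Even a well-trained version of such a classifier faces exactly the arms race the article describes: as the telltale artifacts (blinking, flickering outlines) disappear from the fakes, the detector's signal disappears with them.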
  7. Artificial intelligence (AI) is expected to grow in the cybersecurity marketplace, likely to $18.2 billion by 2023, according to a report from P&S Market Research. AI is still only in its nascent stages, though, and the technologies present several obstacles that organizations must overcome, according to a new white paper by Osterman Research. In a survey sponsored by ProtectWise, Osterman Research learned that AI has penetrated the security operations center (SOC), but there are many challenges that stand in the way of AI being able to deliver on its promises. The survey found that AI has already established a strong foothold, with 73% of respondents reporting that they have implemented security solutions with at least some aspect of AI. Most organizations said their top reasons for incorporating AI were to improve the efficiency of security staff members and to make alert investigations faster. In addition, survey results concluded that 55% of IT executives are the strongest advocates for AI, while only 38% of AI’s strongest supporters identified as non-IT executives. That difference was evidenced in the reported inconsistencies from respondents who reflected on the results of their initial deployment of AI-enabled security products. Participants confessed that AI-enabled security solutions have significantly more security alerts and false positives on a typical day, with 61% of respondents agreeing that creating and implementing rules is burdensome and 52% citing they have no plan to implement additional AI-enabled security solutions in the future. More than half (61%) of all respondents said that AI doesn’t stop zero-days and advanced threats, 54% said it delivers inaccurate results and 42% said it’s difficult to use. Additionally, 71% said it’s expensive. While there is still progress that needs to be made, the survey found the future of AI has great potential. “Our bottom line assessment is that AI is not yet 'there,' but offers the promise of improving the speed of processing alerts and false positives, particularly in organizations that receive massive numbers of both. Moreover, while the full potential of AI has yet to be realized, it holds the promise of seriously addressing the cybersecurity skills shortage – it may not be a 'silver bullet,' but it may be a silver-plated one,” the survey said. source
  8. Deep inside the Earth, miles down in many cases, rock-sealed pockets hold buried treasures. These hydrocarbon reservoirs are packed with organic compounds that make the world go 'round. When the contents are extracted and refined, the resulting oil and gas help light cities, transport people and run industries. For some engineers at BP, Job One is locating the reservoirs. Job Two is accurately predicting what percentage of hydrocarbons are retrievable, also known as "recovery factor." Traditionally, that task has been iterative and resource-heavy, and it can have an element of human bias. Data scientists, tapping their own expertise and experiences, may try six or seven different algorithms as they work to dial in the best prediction model. This can take weeks. But by using the Azure Machine Learning service, BP is working to reduce the time needed to pinpoint prediction models while also boosting the productivity of its data scientists. Automated machine learning empowers customers to identify an end-to-end, machine-learning pipeline for any problem. Transform recently caught up with Manish Naik, BP's principal for digital innovation, at his London office to learn more about the company's new method for drilling down into its data. TRANSFORM: What is the value that BP gains by improving its recovery factor forecasts? MANISH NAIK: This prediction of recovery factor from underlying data is a crucial activity – the basis of key decisions made by the company that are potentially worth billions of dollars. This data is vast and complex, involving hundreds of geological properties or features. To complement the current ways of prediction, which tend to have some qualitative input, we decided to explore machine learning to see if we can improve prediction. We sought to answer these questions: Can we improve the quality of the prediction? Can we eliminate some of the human bias? Manish Naik. Courtesy of BP. TRANSFORM: How do your data scientists use automated machine learning? NAIK: They give it broad direction. With one line of code, it runs through different algorithms within the prediction family and the different parameter (or variable) combos that previously were manually tested by the scientists. The power of the cloud comes in here. The results are comparable to what the data scientists produced. TRANSFORM: One line of code, wow. How much time does this save them? NAIK: Depending on the amount of data, type of activity – such as the prediction or classification – and algorithm family, automated machine learning could potentially reduce the effort down from weeks to days or days to hours. TRANSFORM: How often is the prediction model that BP developed for recovery factor now used across the company? NAIK: This model is in production and used by hundreds of subject matter experts globally in BP on a daily basis. TRANSFORM: As automated machine learning becomes a core tool for BP's data scientists, what are the larger, potential benefits for the company? NAIK: It will make data scientists more productive, which means faster time to market for machine-learning (ML) projects. And as data scientists continue to use more and more automated machine learning, they will develop trust in the output it provides. That can become a starting point for the work of our data scientists. In the future, this will form a part of a robust benchmarking process for all ML projects, thus improving quality. 
TRANSFORM: More broadly, in what ways do you foresee Artificial Intelligence (AI) and the cloud further reshaping the oil and gas industry? NAIK: Oil and gas companies across the value chain – from exploration to retail – generate significant amounts of data. This means there are lots of opportunities to exploit this data using AI, ML and cloud technologies. In broad terms, there is significant potential for these technologies to help improve the efficiency of our operations and help us make better, more accurate and informed decisions. source
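Naik's description of automated machine learning, one call that sweeps through candidate algorithms and parameter combinations and reports back the best model, is the core idea the interview turns on. The sketch below illustrates that idea generically with scikit-learn and fabricated data; it is not the Azure Machine Learning AutoML API that BP uses, and the feature and target stand-ins are invented.

```python
# Generic illustration of what automated model selection does: try several
# regression algorithms, keep the best by cross-validation. This is NOT the
# Azure Machine Learning API the article refers to; it only shows the idea.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Hypothetical stand-ins for "hundreds of geological properties" and recovery factor.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))               # geological features (fake data)
y = X[:, 0] * 0.5 + rng.normal(size=500)     # recovery factor (fake target)

candidates = [
    Ridge(alpha=1.0),
    RandomForestRegressor(n_estimators=200, random_state=0),
    GradientBoostingRegressor(random_state=0),
]

best_model, best_score = None, -np.inf
for model in candidates:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{type(model).__name__:>26}: mean R^2 = {score:.3f}")
    if score > best_score:
        best_model, best_score = model, score

print("selected:", type(best_model).__name__)
```

A managed AutoML service adds to this loop the things that take the weeks Naik mentions: far more candidate families, hyperparameter search, feature preprocessing, and cloud-scale parallelism.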
  9. When Alice fell down the rabbit hole, she got everything she asked for and more: cute little rabbits with gloves on their hands, caterpillars that could talk, and mean, nasty flowers that thought she was a weed. It's a comical story, but one that could become reality in the future. The expression "going down the rabbit hole" is sometimes used to describe just how far we are willing to push the limits. On a basic literary level, it speaks of the beginning of a fanciful adventure that we cannot understand — but one that will change our lives forever. It is with this in mind that computer scientists warn of a potential danger we may not even be aware of: Our tinkering with artificial intelligence could lead to an external brain or A.I. system that we will no longer have the ability to control. A recent editorial published on TechnologyReview.com — MIT's resource for exploring new technologies — warned of the pace at which we are advancing technology. Recent algorithms are being designed at such a remarkable speed that even their creators are astounded. "This could be a problem," writes Will Knight, the author of the report. Knight describes a 2016 milestone, a self-driving car that was quietly released in New Jersey. Chip maker Nvidia differentiated its model from other companies such as Google, Tesla, or General Motors by having the car rely entirely on an algorithm that taught itself how to drive after "watching" a human do it. Nvidia's car successfully taught itself how to drive, much to the delight of the company's scientists. Nevertheless, Nvidia's programmers were unsettled by how much (and how fast) the algorithm learned the skill. Clearly, the system was able to gather information and translate it into tangible results, yet exactly how it did this was not known. The system was designed so that information from the vehicle's sensors was transmitted into a huge network of artificial neurons which would then process the data and deliver an appropriate command to the steering wheel, brakes, or other systems. These responses match those of a human driver. (A simplified sketch of such an end-to-end network appears at the end of this item.) But what would happen if the car did something totally unexpected — say, smash into a tree or run a red light? There are complex behaviors and actions that could potentially happen, and the very scientists who made the system struggle to come up with an answer. AI is learning…and it's learning pretty darn fast Nvidia's underlying A.I. technology is based on the concept of "deep learning," which, up till now, scientists were not sure could be applied to robots. The theory of an external or artificial "thinking" brain is nothing new. This has colored our imaginations since the 1950s. The sore lack of materials and the gross manual labor needed to input all the data, however, has prevented the dream from coming to fruition. Nevertheless, advancements in technology have resulted in several breakthroughs, including the Nvidia self-driving car. Already there are aspirations to develop self-thinking robots that can write news, detect schizophrenia in patients, and approve loans, among other things. Is it exciting? Yes, of course it is; but scientists are worried about the unsaid implications of the growth. The MIT editorial says that "we [need to] find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur — and it's inevitable they will. 
That’s one reason Nvidia’s car is still experimental.” In an effort to control these systems, some of the world’s largest technology firms have banded together to create an “A.I. ethics board.” As reported on DailyMail.co.uk, the companies involved are Amazon, DeepMind, Google, Facebook, IBM, and Microsoft. This coalition calls themselves the Partnership on Artificial Intelligence to Benefit People and Society and operate under eight ethics. The objective of this group is to ensure that advancements in technologies will empower as many people as possible, and be actively engaged in the development of A.I. so that each group is held accountable to their broad range of stakeholders. Just how far down the rabbit hole are we, as a society, planning on going? You can learn a little bit more when you visit Robotics.news. < Here >
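The sensors-to-steering pipeline described above can be illustrated with a toy end-to-end network: camera pixels go in, a single steering value comes out, and the training target is simply what the human driver did for the same frame. The sketch below is loosely inspired by that published idea; the layer sizes, image resolution, and data are all invented, and it is not Nvidia's actual system.

```python
# Sketch of an end-to-end driving network: camera pixels in, steering angle out.
# Loosely inspired by the published "learn to steer by watching a human" idea;
# the layer sizes and data are illustrative, not Nvidia's actual architecture.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 100), nn.ReLU(),
            nn.Linear(100, 1),            # one output: the steering command
        )

    def forward(self, camera_frame):
        return self.head(self.conv(camera_frame))

# The training signal is simply what the human driver did for the same frame:
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 66, 200)            # dummy camera frames
human_steering = torch.rand(8, 1) * 2 - 1     # dummy recorded steering angles
loss = nn.functional.mse_loss(model(frames), human_steering)
loss.backward()
optimizer.step()
```

Nothing in this pipeline produces an explanation of why a particular frame yields a particular steering angle, which is precisely the interpretability worry the editorial raises.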
  10. If science fiction has taught us anything it’s that artificial intelligence will one day lead to the downfall of the entirety of mankind. That day is (probably) still a long way away, if it ever actually happens, but for now we get to enjoy some of the nicer aspects of AI, such as its ability to write poetic masterpieces. Researchers in Australia in partnership with the University of Toronto have developed an algorithm capable of writing poetry. Far from your generic rhymes, this AI actually follows the rules, taking metre into account as it weaves its words. The AI is good. Really good. And it’s even capable of tricking humans into thinking that its poems were penned by a man instead of a machine. According to the researchers, the AI was trained extensively on the rules it needed to follow to craft an acceptable poem. It was fed nearly 3,000 sonnets as training, and the algorithm tore them apart to teach itself how the words worked with each other. Once the bot was brought up to speed it was tasked with crafting some poems of its own. Here’s a sample: With joyous gambols gay and still array No longer when he twas, while in his day At first to pass in all delightful ways Around him, charming and of all his days Not bad, huh? Of course, knowing that an AI made it might make it feel more stilted and dry than if you had read it without any preconceptions, but there’s no denying that it’s a fine poem. In fact, the poems written by the AI follow the rules of poetry even more closely than human poets like Shakespeare. I guess that’s the cold machine precision kicking in. When the bot’s verses were mixed with human-written poems, and then scoured by volunteers, the readers were split 50-50 over who wrote them. That’s a pretty solid vote of confidence in the AI’s favor, but there were still some things that gave the bot away, including errors in wording and grammar. Still, it’s a mighty impressive achievement. Perhaps when our robot overlords enslave humanity we’ll at least be treated to some nice poetry. < Here >
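The training recipe described above, feeding the model a few thousand sonnets so it learns how the text fits together and then sampling new lines, can be illustrated with a bare-bones character-level language model. The sketch below is only the generic "train on text, then sample" core, with a one-line stand-in corpus; the researchers' actual system modelled metre and rhyme explicitly and is far more sophisticated.

```python
# Bare-bones character-level language model: train on a corpus of sonnets, then
# sample new text. The actual research system added explicit metre and rhyme
# handling; this only shows the "learn from ~3,000 sonnets, then generate" idea.
import torch
import torch.nn as nn

corpus = "shall i compare thee to a summers day\n"  # stand-in for the sonnet corpus
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharLM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, idx):
        h, _ = self.rnn(self.embed(idx))
        return self.out(h)

model = CharLM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([[stoi[c] for c in corpus]])

for step in range(200):                       # next-character prediction training
    logits = model(data[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Sample a new "line" one character at a time.
idx = data[:, :1]
for _ in range(40):
    probs = torch.softmax(model(idx)[:, -1], dim=-1)
    idx = torch.cat([idx, torch.multinomial(probs, 1)], dim=1)
print("".join(chars[i] for i in idx[0].tolist()))
```

With a real corpus of thousands of sonnets (and the extra constraints for iambic pentameter the researchers describe), the same sampling loop is what produces verse regular enough to split human readers 50-50.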
  11. In 2020, Tokyo will break new ground when it becomes the first Olympics to use facial recognition technology in an attempt to improve security. On Tuesday, the organizing committee announced that Tokyo will be utilizing the tech for identifying officials, athletes, media, and staff at the 2020 Olympic and Paralympic Games. It will, however, not be utilized in the identification of any spectators who attend the events. The utilization of the facial recognition technology is an effort to both increase security and hasten the entrance of authorized individuals. The Japanese IT conglomerate NEC Corp is currently developing the system. The company will be collecting digital images of all authorized individuals in advance, and then storing them within a database so that cross-checking can be conducted at all entry points. The executive director of security for the Tokyo 2020 Games, Tsuyoshi Iwashita, had this to say in the announcement on the committee's blog: "This latest technology will enable strict identification of accredited people compared with relying solely on the eyes of security staff, and also enables swift entry to venues which will be necessary in the intense heat of summer. I hope this will ensure a safe and secure Olympic and Paralympic Games and help athletes perform at their best." New technology is often embraced by the Olympic Games. PyeongChang's Winter Olympics in Korea used fuel-cell-powered Nexo SUVs for shuttling Olympic attendees around PyeongChang and Gangneung. Also, automatically inflated air bags were used by skiers as crash protection. NEC Corp and the Tokyo Organizing Committee did not immediately respond to requests for comment. Could this be the first real step towards facial recognition being used as standard at large events? Let us know what you think about this article in the comments section below. < Here >
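The enroll-in-advance, check-at-the-gate flow described above is easy to prototype with the open-source face_recognition Python library, which wraps dlib's face embeddings. The sketch below only illustrates that flow; it is not NEC's system, and the person IDs and file names are invented.

```python
# Enroll-then-verify sketch using the open-source face_recognition library.
# This only illustrates the flow described in the article (photos collected in
# advance, checked again at the gate); it is not NEC's system, and the IDs and
# file names are hypothetical.
import face_recognition

# 1) Enrollment: build a database of embeddings from accreditation photos.
accredited = {}
for person_id, photo in [("athlete_001", "athlete_001.jpg"),
                         ("staff_042", "staff_042.jpg")]:
    image = face_recognition.load_image_file(photo)
    accredited[person_id] = face_recognition.face_encodings(image)[0]

# 2) At the entry gate: encode the face seen by the camera and compare.
def check_entry(gate_photo: str, tolerance: float = 0.5):
    gate_image = face_recognition.load_image_file(gate_photo)
    encodings = face_recognition.face_encodings(gate_image)
    if not encodings:
        return None                              # no face found in the frame
    ids = list(accredited)
    matches = face_recognition.compare_faces(
        [accredited[i] for i in ids], encodings[0], tolerance=tolerance)
    return next((i for i, ok in zip(ids, matches) if ok), None)

print(check_entry("gate_camera_frame.jpg"))      # accredited person_id, or None
```

The tolerance threshold is where the security trade-off lives: loosen it and entry gets faster but impostors slip through more easily; tighten it and accredited people get stopped in the summer heat the organizers are worried about.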
  12. The Department of Defense's network is protected from malware threats by Sharkseer, one of the National Security Agency's (NSA) top cybersecurity programs. The DoD is transferring the Sharkseer program to the Defense Information Systems Agency because it aligns better with the DISA mission, according to NSA spokeswoman Natalie Pittore. The transfer from NSA to DISA has been laid out in the National Defense Authorization Act. The NDAA was negotiated by Congress back on July 23rd of this year, but the actual hand-off seems to have been planned for a long time. According to congressional records, for numerous years top NSA officials have deemed the program "among the highest cybersecurity initiatives." Sharkseer utilizes artificial intelligence (AI) to inspect incoming traffic and search for any possible vulnerabilities. At a basic level, the program examines all of the DoD's incoming traffic for zero-day exploits and advanced threats. It also monitors documents, emails, and other incoming traffic that could possibly infect the network. The program automatically determines both the location and the identity of computer hosts that have either sent or received malware. Sharkseer also acts as a sort of "sandbox": Pittore describes it as an application government officials use to test suspicious-looking files with automated behavior analysis. The DoD's cybersecurity has been criticized by Congress, which accused it of being deployed in a "piecemeal fashion." However, Congress has praised the Sharkseer program as a success, crediting it with detecting more than 2 billion cyber events across the DoD's networks, both unclassified and classified, according to a statement by Rep. Barbara Comstock, R-Va. Sharkseer escalated from a concept to reality back in 2014, when $30 million of congressional funding was invested in the program. Congress has attempted to secure more funds for Sharkseer in later fiscal years, but the amount that was eventually apportioned is unclear. Both houses of Congress still need to approve the NDAA, and it also has to be signed off by the President, though the Sharkseer provision is not thought to be controversial. < Here >
  13. Doctors fed it hypothetical scenarios, not real patient data IBM’s Watson supercomputer gave unsafe recommendations for treating cancer patients, according to documents reviewed by Stat. The report is the latest sign that Watson, once hyped as the future of cancer research, has fallen far short of expectations. In 2012, doctors at Memorial Sloan Kettering Cancer Center partnered with IBM to train Watson to diagnose and treat patients. But according to IBM documents dated from last summer, the supercomputer has frequently given bad advice, like when it suggested a cancer patient with severe bleeding be given a drug that could cause the bleeding to worsen. (A spokesperson for Memorial Sloan Kettering said this suggestion was hypothetical and not inflicted on a real patient.) “This product is a piece of s—,” one doctor at Jupiter Hospital in Florida told IBM executives, according to the documents. “We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.” The documents come from a presentation given by Andrew Norden, IBM Watson’s former deputy health chief, right before he left the company. In addition to showcasing customer dissatisfaction, they reveal problems with methods, too. Watson for Oncology was supposed to synthesize enormous amounts of data and come up with novel insights. But it turns out most of the data fed to it is hypothetical and not real patient data. That means the suggestions Watson made were simply based off the treatment preferences of the few doctors providing the data, not actual insights it gained from analyzing real cases. An IBM spokesperson told Gizmodo that Watson for Oncology has “supported care for more than 84,000 patients” and is still learning. Apparently, it’s not learning the right things. < Here >
  14. Software developed at the University of Bonn can accurately predict future actions Summary: Scientists have developed software that can look minutes into the future: The program learns the typical sequence of actions, such as cooking, from video sequences. Then it can predict in new situations what the chef will do at which point in time. Computer scientists from the University of Bonn have developed software that can look a few minutes into the future: The program first learns the typical sequence of actions, such as cooking, from video sequences. Based on this knowledge, it can then accurately predict in new situations what the chef will do at which point in time. Researchers will present their findings at the world's largest Conference on Computer Vision and Pattern Recognition, which will be held June 19-21 in Salt Lake City, USA. The perfect butler, as every fan of British social drama knows, has a special ability: He senses his employer's wishes before they have even been uttered. The working group of Prof. Dr. Jürgen Gall wants to teach computers something similar: "We want to predict the timing and duration of activities -- minutes or even hours before they happen," he explains. A kitchen robot, for example, could then pass the ingredients as soon as they are needed, pre-heat the oven in time -- and in the meantime warn the chef if he is about to forget a preparation step. The automatic vacuum cleaner meanwhile knows that it has no business in the kitchen at that time, and instead takes care of the living room. We humans are very good at anticipating the actions of others. For computers however, this discipline is still in its infancy. The researchers at the Institute of Computer Science at the University of Bonn are now able to announce a first success: They have developed self-learning software that can estimate the timing and duration of future activities with astonishing accuracy for periods of several minutes. Training data: four hours of salad videos The training data used by the scientists included 40 videos in which performers prepare different salads. Each of the recordings was around 6 minutes long and contained an average of 20 different actions. The videos also contained precise details of what time the action started and how long it took. The computer "watched" these salad videos totaling around four hours. This way, the algorithm learned which actions typically follow each other during this task and how long they last. This is by no means trivial: After all, every chef has his own approach. Additionally, the sequence may vary depending on the recipe. "Then we tested how successful the learning process was," explains Gall. "For this we confronted the software with videos that it had not seen before." At least the new short films fit into the context: They also showed the preparation of a salad. For the test, the computer was told what is shown in the first 20 or 30 percent of one of the new videos. On this basis it then had to predict what would happen during the rest of the film. That worked amazingly well. Gall: "Accuracy was over 40 percent for short forecast periods, but then dropped the more the algorithm had to look into the future." For activities that were more than three minutes in the future, the computer was still right in 15 percent of cases. However, the prognosis was only considered correct if both the activity and its timing were correctly predicted. 
Gall and his colleagues want the study to be understood only as a first step into the new field of activity prediction. Especially since the algorithm performs noticeably worse if it has to recognize on its own what happens in the first part of the video, instead of being told. Because this analysis is never 100 percent correct -- Gall speaks of "noisy" data. "Our process does work with it," he says. "But unfortunately nowhere near as well." Sample test videos and predictions derived from them are available at https://www.youtube.com/watch?v=xMNYRcVH_oI < Here >
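To make the anticipation task above concrete, here is a deliberately naive baseline: learn from annotated training videos which action usually follows which and how long each action tends to last, then extend an observed prefix of a new video into the future. The Bonn group's system uses learned neural models and is far more capable; the sketch below, with invented salad-making labels, only illustrates the problem setup.

```python
# Naive baseline for the anticipation task described above: from training videos,
# learn which action typically follows which and how long actions last, then
# continue an observed prefix into the future. The Bonn system uses learned neural
# models; this only illustrates the setup, and the labels are made up.
from collections import Counter, defaultdict

# Training data: (action, duration in seconds) sequences from annotated videos.
train_videos = [
    [("cut_tomato", 30), ("add_dressing", 10), ("mix", 20)],
    [("cut_tomato", 25), ("cut_cucumber", 35), ("mix", 15)],
]

next_action = defaultdict(Counter)
durations = defaultdict(list)
for video in train_videos:
    for (a, _), (b, _) in zip(video, video[1:]):
        next_action[a][b] += 1                 # which action tends to follow which
    for a, d in video:
        durations[a].append(d)                 # how long each action tends to last

def predict(observed, horizon_s=120):
    """Extend an observed (action, duration) prefix until the horizon is filled."""
    timeline, t = list(observed), sum(d for _, d in observed)
    current = observed[-1][0]
    while t < horizon_s and next_action[current]:
        current = next_action[current].most_common(1)[0][0]
        d = sum(durations[current]) / len(durations[current])
        timeline.append((current, d))
        t += d
    return timeline

print(predict([("cut_tomato", 28)]))
```

Scoring such a forecast the way the article describes, counting a prediction correct only when both the activity and its timing match, makes clear why accuracy drops quickly the further into the future the model looks.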
  15. Microsoft president Brad Smith warns authorities might track, investigate or arrest people based on flawed evidence Microsoft has called for facial recognition technology to be regulated by government, with laws governing its acceptable uses. In a blog post on the company’s website on Friday, Microsoft president Brad Smith called for a congressional bipartisan “expert commission” to look into regulating the technology in the US. “It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse,” he wrote. “Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime.” Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognising a person’s face from a photo or through a camera. In May, US civil liberties groups called on Amazon to stop offering facial recognition services to governments, warning that the software could be used to unfairly target immigrants and people of colour. While Smith called some uses for the technology positive and “potentially even profound” – such as finding a missing child or identifying a terrorist – he said other potential applications were “sobering”. “Imagine a government tracking everywhere you walked over the past month without your permission or knowledge,” he wrote. “Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. “Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.” Smith said the need for government action “does not absolve technology companies of our own ethical responsibilities”. “We recognise that one of the difficult issues we’ll need to address is the distinction between the development of our facial recognition services and the use of our broader IT infrastructure by third parties that build and deploy their own facial recognition technology,” he wrote. He said Microsoft, which supplies face recognition to some businesses, has already rejected some customers’ requests to deploy the technology in situations involving “human rights risks”. A Microsoft spokeswoman declined to provide more details about what opportunities the company has passed over because of ethical concerns. Smith also defended the company’s contract with US Immigration and Customs Enforcement, saying it did not involve face recognition. < Here >
  16. System can track multiple people across scenes, despite changing camera angles BIG BLUE IBM has used its Watson artificial intelligence (AI) tech to develop a new algorithm for multi-face tracking. The system uses AI to track multiple individuals across scenes, despite changing camera angles, lighting, and appearances. Collaborating with Professor Ying Hung of the Department of Statistics and Biostatistics in Rutgers University, IBM Watson researcher Chung-Ching Lin led a team of scientists to develop the technology, using a method to spot different individuals in a video sequence. The system is also able to recognise if people leave and then re-enter the video, even if they look very different. To create this innovation in AI, Lin explained that the team first made 'tracklets' for the people present in the source material. "The tracklets are based on co-occurrence of multiple body parts (face, head and shoulders, upper body, and whole body), so that people can be tracked even when they are not fully in view of the camera - for example, their faces are turned away or occluded by other objects." Lin added: "We formulate the multi-person tracking problem as a graph structure with two types of edges." The first of these is 'spatial edges', which denote the connections of different body parts of a candidate within a frame and are used to generate the hypothesised state of a candidate. The second is 'temporal edges', which refer to the connections of the same body parts over adjacent frames and are used to estimate the state of each individual person in different frames. "We generate face tracklets using face-bounding boxes from each individual person's tracklets and extract facial feature for clustering," he added. To see how well the technology could perform, Lin and his team compared it against state-of-the-art methods in analysing challenging datasets of unconstrained videos. In one experiment, they used music videos, which feature high image quality but significant, rapid changes in the scene, camera setting, camera movement, makeup, and accessories, such as eyeglasses. "Our algorithm outperformed other methods with respect to both clustering accuracy and tracking," Lin added. "Clustering purity was substantially better with our algorithm compared with the other methods [and] automatically determined the number of people, or clusters, to be tracked without the need for manual video analysis." The algorithm and its performance are described in more detail in IBM's CVPR research paper, A Prior-Less Method for Multi-Face Tracking in Unconstrained Videos. < Here >
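The two edge types Lin describes can be pictured as a small graph-construction routine: one node per detected body part per candidate per frame, spatial edges tying a candidate's parts together within a frame, and temporal edges tying the same part across adjacent frames. The sketch below builds only that structure over fabricated detections; it is not IBM's implementation, which also handles hypothesis matching, appearance features, and the clustering step.

```python
# Sketch of the two-edge-type graph described in the article: spatial edges connect
# the body parts of one candidate within a frame; temporal edges connect the same
# body part across adjacent frames. The detections are fake placeholders; this is
# only the structure, not IBM's tracking algorithm.
from itertools import combinations

# detections[frame][candidate] -> {body part: bounding box (x, y, w, h)}
detections = {
    0: {"cand_a": {"face": (10, 10, 40, 40), "upper_body": (0, 10, 80, 160)}},
    1: {"cand_a": {"face": (12, 11, 40, 40), "upper_body": (2, 11, 80, 160)}},
}

spatial_edges, temporal_edges = [], []
for frame, cands in detections.items():
    for cand, parts in cands.items():
        # Spatial edges: link every pair of detected parts of the same candidate.
        for p1, p2 in combinations(parts, 2):
            spatial_edges.append((frame, cand, p1, p2))
        # Temporal edges: link the same part of the same candidate across frames.
        prev = detections.get(frame - 1, {}).get(cand, {})
        for part in parts:
            if part in prev:
                temporal_edges.append((frame - 1, frame, cand, part))

print(len(spatial_edges), "spatial edges;", len(temporal_edges), "temporal edges")
```

The point of the multi-part nodes is the one Lin makes about occlusion: even when the face is turned away, the upper-body or whole-body nodes keep the candidate's track alive until the face tracklet can be picked up again and clustered.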
  17. Fast.ai - Part 1 - Lesson 1 - Annotated notes

Building a world-class image classifier with three lines of Python code

The first lesson gives an introduction into the why and how of the fast.ai course, and you will learn the basics of Jupyter Notebooks and how to use the fast.ai library to build a world-class image classifier in three lines of Python. You will get a feel for what deep learning is and why it works, as well as possible applications you can build yourself.

Introduction

These are my annotated notes from the first lesson of the first part of the Fast.ai course. I’m taking the course for a second time, which means I’m re-watching the videos, reading the papers and making sure I am able to reproduce the code. As part of this process I’m writing down more detailed notes to help me better understand the material. Maybe they can be of help to you as well.

Lesson takeaways

By the end of the lesson you should know/understand:
- the goal of fast.ai;
- how to use fast.ai;
- the “practical, top-down approach” of fast.ai;
- how to run Python code in Jupyter notebooks;
- how to apply the basic shortcuts in a Jupyter notebook;
- why we need a GPU and where to access them;
- how to build your own image classifier using the fast.ai library;
- the basic building blocks of a neural network;
- what deep learning is and why it works;
- how you can apply it yourself;
- the basics of how convolutional networks work;
- what gradient descent, a learning rate and an epoch are;
- how to set a good learning rate to train your model.

Table of Contents

- TL;DR
- Introduction
- Lesson takeaways
- Table of Contents
- The goal of fast.ai
- The practical, top-down approach of fast.ai
- How to use fast.ai
  - Fast.ai Part 1 course structure
- How to run Python code in Jupyter notebooks
  - Jupyter Notebook basics
  - Python 3
  - Jupyter Notebook shortcuts
    - General shortcuts
    - Code shortcuts
  - Notebook configuration
    - Reload extensions
    - matplotlib inline
- Why we need a GPU and where to access them
  - Options discussed in the lesson video
  - Some more options
  - My personal experience
- How to build your own classifier using the fast.ai library
  - fast.ai library
  - A labeled dataset
  - Side note: Getting the data - PDL - Python Download Library
  - Explore a data sample
  - Train our first network architecture
    - data
    - learn
    - learn.fit()
  - Try it yourself
- What deep learning is and why it works
  - Why we classify
  - AlphaGo
  - Fraud detection
  - Artificial Intelligence > Machine Learning > Deep Learning
  - Infinitely flexible function: A neural network
  - All-purpose parameter fitting: Gradient Descent
  - Fast and scalable
  - One more thing
  - Putting it all together: examples of deep learning
- Digging a little deeper: The basics of Convolutional Networks
  - Infinitely flexible function
  - Gradient Descent and learning rate
  - Putting it all together
- The goal for this lesson
- Notebooks used in the lesson
- Interesting links / Links from the lesson

If interested, please take the course < here >.
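For reference, the “three lines” the lesson builds up to look roughly like this with the 2018-era fast.ai library (v0.7). The PATH value and the dogs-vs-cats folder layout (train/ and valid/ subfolders, one per class) are assumptions taken from the lesson setup, and exact names may differ slightly from the notebook you run:

```python
# Minimal sketch of the lesson's three-line image classifier, assuming fast.ai v0.7
# and a dataset folder with train/ and valid/ subdirectories, one folder per class.
from fastai.conv_learner import *  # wildcard import used throughout the v0.7 course notebooks

PATH = "data/dogscats/"   # assumed dataset location from the lesson
arch, sz = resnet34, 224  # pretrained architecture and input image size

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))  # labelled data + transforms
learn = ConvLearner.pretrained(arch, data, precompute=True)                  # learner on a pretrained ConvNet
learn.fit(0.01, 3)                                                           # learning rate 0.01, 3 epochs
```

The last three lines are the `data`, `learn` and `learn.fit()` steps referenced in the table of contents above.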
  18. A video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii in Beijing. Credit: Gilles Sabrie for The New York Times

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station. In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival. In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

“In the past, it was all about instinct,” said Shan Jun, the deputy chief of the police at the railway station in Zhengzhou, where the heroin smuggler was caught. “If you missed something, you missed it.”

China is reversing the commonly held vision of technology as a great democratizer, bringing people more freedom and connecting them to the world. In China, it has brought control.

In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who can’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States. Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.

Even so, China’s ambitions outstrip its abilities. Technology in place at one train station or crosswalk may be lacking in another city, or even the next block over. Bureaucratic inefficiencies prevent the creation of a nationwide network.

For the Communist Party, that may not matter. Far from hiding their efforts, Chinese authorities regularly state, and overstate, their capabilities. In China, even the perception of surveillance can keep the public in line.

Some places are further along than others. Invasive mass-surveillance software has been set up in the west to track members of the Uighur Muslim minority and map their relations with friends and family, according to software viewed by The New York Times.

“This is potentially a totally new way for the government to manage the economy and society,” said Martin Chorzempa, a fellow at the Peterson Institute for International Economics. “The goal is algorithmic governance,” he added.

The Shame Game

The intersection south of Changhong Bridge in the city of Xiangyang used to be a nightmare. Cars drove fast and jaywalkers darted into the street. Then last summer, the police put up cameras linked to facial recognition technology and a big, outdoor screen. Photos of lawbreakers were displayed alongside their names and government I.D. numbers. People were initially excited to see their faces on the board, said Guan Yue, a spokeswoman, until propaganda outlets told them it was punishment.

“If you are captured by the system and you don’t see it, your neighbors or colleagues will, and they will gossip about it,” she said. 
“That’s too embarrassing for people to take.”

China’s new surveillance is based on an old idea: Only strong authority can bring order to a turbulent country. Mao Zedong took that philosophy to devastating ends, as his top-down rule brought famine and then the Cultural Revolution.

His successors also craved order but feared the consequences of totalitarian rule. They formed a new understanding with the Chinese people. In exchange for political impotence, they would be mostly left alone and allowed to get rich. It worked. Censorship and police powers remained strong, but China’s people still found more freedom. That new attitude helped usher in decades of breakneck economic growth.

Today, that unwritten agreement is breaking down. China’s economy isn’t growing at the same pace. It suffers from a severe wealth gap. After four decades of fatter paychecks and better living, its people have higher expectations.

Xi Jinping, China’s top leader, has moved to solidify his power. Changes to Chinese law mean he could rule longer than any leader since Mao. And he has undertaken a broad corruption crackdown that could make him plenty of enemies.

For support, he has turned to the Mao-era beliefs in the importance of a cult of personality and the role of the Communist Party in everyday life. Technology gives him the power to make it happen.

“Reform and opening has already failed, but no one dares to say it,” said Chinese historian Zhang Lifan, citing China’s four-decade post-Mao policy. “The current system has created severe social and economic segregation. So now the rulers use the taxpayers’ money to monitor the taxpayers.”

Mr. Xi has launched a major upgrade of the Chinese surveillance state. China has become the world’s biggest market for security and surveillance technology, with analysts estimating the country will have almost 300 million cameras installed by 2020. Chinese buyers will snap up more than three-quarters of all servers designed to scan video footage for faces, predicts IHS Markit, a research firm. China’s police will spend an additional $30 billion in the coming years on techno-enabled snooping, according to one expert quoted in state media.

Government contracts are fueling research and development into technologies that track faces, clothing and even a person’s gait. Experimental gadgets, like facial-recognition glasses, have begun to appear.

Judging public Chinese reaction can be difficult in a country where the news media is controlled by the government. Still, so far the average Chinese citizen appears to show little concern. Erratic enforcement of laws against everything from speeding to assault means the long arm of China’s authoritarian government can feel remote from everyday life. As a result, many cheer on new attempts at law and order.

“It’s one of the biggest intersections in the city,” said Wang Fukang, a college student who volunteered as a guard at the crosswalk in Xiangyang. “It’s important that it stays safe and orderly.”

The Surveillance Start-Up

Start-ups often make a point of insisting their employees use their technology. In Shanghai, a company called Yitu has taken that to the extreme. The halls of its offices are dotted with cameras, looking for faces. From desk to break room to exit, employees’ paths are traced on a television screen with blue dotted lines. The monitor shows their comings and goings, all day, every day.

In China, snooping is becoming big business. 
As the country spends heavily on surveillance, a new generation of start-ups have risen to meet the demand. Chinese companies are developing globally competitive applications like image and voice recognition. Yitu took first place in a 2017 open contest for facial recognition algorithms held by the United States government’s Office of the Director of National Intelligence. A number of other Chinese companies also scored well. A technology boom in China is helping the government’s surveillance ambitions. In sheer scale and investment, China already rivals Silicon Valley. Between the government and eager investors, surveillance start-ups have access to plenty of money and other resources. In May, the upstart A.I. company SenseTime raised $620 million, giving it a valuation of about $4.5 billion. Yitu raised $200 million last month. Another rival, Megvii, raised $460 million from investors that included a state-backed fund created by China’s top leadership. At a conference in May at an upscale hotel in Beijing, China’s security-industrial complex offered its vision of the future. Companies big and small showed off facial-recognition security gates and systems that track cars around cities to local government officials, tech executives and investors. Private companies see big potential in China’s surveillance build-out. China’s public security market was valued at more than $80 billion last year but could be worth even more as the country builds its capabilities, said Shen Xinyang, a former Google data scientist who is now chief technology officer of Eyecool, a start-up. “Artificial intelligence for public security is actually still a very insignificant portion of the whole market,” he said, pointing out that most equipment currently in use was “nonintelligent.” Many of these businesses are already providing data to the government. Mr. Shen told the group that his company had surveillance systems at more than 20 airports and train stations, which had helped catch 1,000 criminals. Eyecool, he said, is also handing over two million facial images each day to a burgeoning big-data police system called Skynet. At a building complex in Xiangyang, a facial-recognition system set up to let residents quickly through security gates adds to the police’s collection of photos of local residents, according to local Chinese Communist Party officials. Wen Yangli, an executive at Number 1 Community, which makes the product, said the company is at work on other applications. One would detect when crowds of people are clashing. Another would allow police to use virtual maps of buildings to find out who lives where. China’s surveillance companies are also looking to test the appetite for high-tech surveillance abroad. Yitu says it has been expanding overseas, with plans to increase business in regions like Southeast Asia and the Middle East. At home, China is preparing its people for next-level surveillance technology. A recent state-media propaganda film called “Amazing China” showed off a similar virtual map that provided police with records of utility use, saying it could be used for predictive policing. “If there are anomalies, the system sends an alert,” a narrator says, as Chinese police officers pay a visit to an apartment with a record of erratic utility use. The film then quotes one of the officers: “No matter which corner you escape to, we’ll bring you to justice.” Enter the Panopticon For technology to be effective, it doesn’t always have to work. Take China’s facial-recognition glasses. 
Police in the central Chinese city of Zhengzhou recently showed off the specs at a high-speed rail station for state media and others. They snapped photos of a policewoman peering from behind the shaded lenses. But the glasses work only if the target stands still for several seconds. They have been used mostly to check travelers for fake identifications. China’s national database of individuals it has flagged for watching — including suspected terrorists, criminals, drug traffickers, political activists and others — includes 20 million to 30 million people, said one technology executive who works closely with the government. That is too many people for today’s facial recognition technology to parse, said the executive, who asked not to be identified because the information wasn’t public. The system remains more of a digital patchwork than an all-seeing technological network. Many files still aren’t digitized, and others are on mismatched spreadsheets that can’t be easily reconciled. Systems that police hope will someday be powered by A.I. are currently run by teams of people sorting through photos and data the old-fashioned way. Take, for example, the crosswalk in Xiangyang. The images don’t appear instantaneously. The billboard often shows jaywalkers from weeks ago, though recently authorities have reduced the lag to about five or six days. Officials said humans still sift through the images to match them to people’s identities. Still, Chinese authorities who are generally mum about security have embarked on a campaign to convince the country’s people that the high-tech security state is already in place. China’s propagandists are fond of stories in which police use facial recognition to spot wanted criminals at events. An article in the People’s Daily, the Communist Party’s official newspaper, covered a series of arrests made with the aid of facial recognition at concerts of the pop star Jackie Cheung. The piece referenced some of the singer’s lyrics: “You are a boundless net of love that easily trapped me.” In many places, it works. At the intersection in Xiangyang, jaywalking has decreased. At the building complex where Number 1 Community’s facial-recognition gate system has been installed, a problem with bike theft ceased entirely, according to building management. “The whole point is that people don’t know if they’re being monitored, and that uncertainty makes people more obedient,” said Mr. Chorzempa, the Peterson Institute fellow. He described the approach as a panopticon, the idea that people will follow the rules precisely because they don’t know whether they are being watched. In Zhengzhou, police were happy to explain how just the thought of the facial recognition glasses could get criminals to confess. Mr. Shan, the Zhengzhou railway station deputy police chief, cited the time his department grabbed a heroin smuggler. While questioning the suspect, Mr. Shan said, police pulled out the glasses and told the man that what he said didn’t matter. The glasses could give them all the information they needed. “Because he was afraid of being found out by the advanced technology, he confessed,” said Mr. Shan, adding that the suspect had swallowed 60 small packs of heroin. “We didn’t even use any interrogation techniques,” Mr. Shan said. “He simply gave it all up.” < Here >
  19. Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans. “Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”. “It’s essentially prototyping the AI with human beings,” he said. This practice was brought to the fore this week in a Wall Street Journal article highlighting the hundreds of third-party app developers that Google allows to access people’s inboxes. In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company did not mention that humans would view users’ emails in its privacy policy. The third parties highlighted in the WSJ article are far from the first ones to do it. In 2008, Spinvox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work. In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots. In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them. “I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.” Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M. In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself. In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence. Alison Darcy, a psychologist and founder of Woebot, a mental health support chatbot, describes this as the “Wizard of Oz design technique”. “You simulate what the ultimate experience of something is going to be. And a lot of time when it comes to AI, there is a person behind the curtain rather than an algorithm,” she said, adding that building a good AI system required a “ton of data” and that sometimes designers wanted to know if there was sufficient demand for a service before making the investment. 
This approach was not appropriate in the case of a psychological support service like Woebot, she said. “As psychologists we are guided by a code of ethics. Not deceiving people is very clearly one of those ethical principles.” Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health. A team from the University of Southern California tested this with a virtual therapist called Ellie. They found that veterans with post-traumatic stress disorder were more likely to divulge their symptoms when they knew that Ellie was an AI system versus when they were told there was a human operating the machine. Others think companies should always be transparent about how their services operate. “I don’t like it,” said LaPlante of companies that pretend to offer AI-powered services but actually employ humans. “It feels dishonest and deceptive to me, neither of which is something I’d want from a business I’m using. “And on the worker side, it feels like we’re being pushed behind a curtain. I don’t like my labour being used by a company that will turn around and lie to their customers about what’s really happening.” This ethical quandary also raises its head with AI systems that pretend to be human. One recent example of this is Google Duplex, a robot assistant that makes eerily lifelike phone calls complete with “ums” and “ers” to book appointments and make reservations. After an initial backlash, Google said its AI would identify itself to the humans it spoke to. “In their demo version, it feels marginally deceptive in a low-impact conversation,” said Darcy. Although booking a table at a restaurant might seem like a low-stakes interaction, the same technology could be much more manipulative in the wrong hands. What would happen if you could make lifelike calls simulating the voice of a celebrity or politician, for example? “There’s already major fear around AI and it’s not really helping the conversation when there’s a lack of transparency,” Darcy said. < Here >
  21. An AI has defeated some of China’s top doctors in a competition to detect brain tumors and predict hematoma expansion. The system, named BioMind, was developed by the Artificial Intelligence Research Centre for Neurological Disorders at Beijing’s Tiantan Hospital, and it beat a team of doctors ranked among the top 15 in the nation.

BioMind made correct diagnoses 87% of the time, against 66% accuracy for the doctors, and it worked through 225 cases in just 15 minutes, half the time the doctors needed. In predicting hematoma expansion it reached 83% accuracy, against the doctors’ 63%.

The researchers trained BioMind on thousands of images from Beijing Tiantan Hospital’s archives, giving it a level of ‘experience’ in identifying neurological diseases comparable to that of the most senior doctors.

Wang Yongjun, executive vice president of Beijing Tiantan Hospital, told Xinhua that he did not really care about the battle between doctors and AI, as the aim was to show the potential for collaboration in the future.

“I hope through this competition, doctors can experience the power of artificial intelligence,” he said. “This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it.”

“It will be like a GPS guiding a car. It will make proposals to a doctor and help the doctor diagnose,” he said. “But it will be the doctor who ultimately decides, as there are a number of factors that a machine cannot take into consideration, such as a patient’s state of health and family situation.”

This could be the biggest step forward so far in using AI to improve healthcare; however, only time will tell how patients will react to AI involvement in their care. < Here >
  21. A boom in artificial intelligence research has drawn the tech industry’s biggest companies and their checkbooks to the storied English city. CAMBRIDGE, England — When you step off the train here and walk into the city square outside the railway station, you will not see the spires of King’s College Chapel or the turrets atop the Trinity Great Court. The University of Cambridge is still a cab ride away. But you will see a stone and glass office building with a rooftop patio. This is where Amazon designs its flying drones. Just down the block, inside a stone building of its own, Microsoft is designing some sort of computer chip for artificial intelligence. And if you keep walking, you will soon reach a third building, marked with a powder-blue Apple logo, where engineers are pushing the boundaries of Siri, the talking digital assistant included with every iPhone. For years, journalists, city planners and other government officials have called this “Silicon Fen,” envisioning the once sleepy outskirts of Cambridge as Britain’s answer to Silicon Valley. The name — a nod to the coastal plain, or Fenlands, that surrounds Cambridge — never quite stuck. But the concept certainly did, so much so that the world’s tech powers have moved in, snapping up engineers and researchers, particularly in the burgeoning field of artificial intelligence. Their arrival provides a welcome fillip for the British economy that’s expected to be bruised by its departure from the European Union. Apple, Amazon and Google established research and engineering hubs in Britain by acquiring companies that emerged from local universities, spending millions or even hundreds of millions of dollars. There are more than 4,500 high-tech firms in Cambridge, employing nearly 75,000 people, many of them commuters from other communities, according to Cambridge Network, a city business group. Just across the street from Amazon’s Cambridge headquarters, ARM, the computer chip company owned by SoftBank, the Japanese tech giant, recently moved a team of engineers into a row of temporary offices. And a new building is going up just yards away. This is where Samsung, the South Korean tech conglomerate, will soon open another artificial intelligence lab, hiring as many as 150 researchers, engineers and other staff. “For anybody who hasn’t been here for 20 years, they may say: ‘Is this the same place?’” said Claire Ruskin, the chief executive of Cambridge Network, as she drove through the city on a recent afternoon. But those buildings outside the train station are reminders that Britain — like Europe as a whole — does not have its own internet powerhouse, a corporate power capable of pushing the world in new technical, cultural and political directions. The closest match was ARM, and that was acquired by SoftBank in 2016. In London, a 45-minute train ride from Cambridge, you will find DeepMind, perhaps the world’s leading A.I. lab. DeepMind is at the forefront of a technological revolution that many believe will shift economic and societal norms across the globe, and it was acquired by Google in 2014. “We welcome the big existing companies,” said Matthew Hancock, the British secretary of state who oversees digital policy. 
“But we’re incredibly determined to ensure that the next generation of companies are built here.” On a recent Friday morning, Chris Bishop, who oversees Microsoft Research Cambridge, looked out his fifth-floor office window, with its panoramic view of Cambridge, and pointed to the spires of King’s College Chapel rising over the trees in the distance. “Alan Turing was at King’s,” he said. In 1950, with his essay “Computing Machinery and Intelligence,” Turing, the British mathematician, codebreaker and computing pioneer, asked whether machines would ever think on their own. Mr. Bishop, an A.I. researcher who studied at Oxford and took a professorship at the University of Edinburgh before moving to Cambridge, views his work as another link in a long British legacy. Mr. Bishop joined the lab in 1997, just after it was founded. In those days, Microsoft was the one tech giant paying big money to lure top academics into this kind of corporate research. Now, as artificial intelligence takes center stage at leading tech companies, paying big dollars for academics is common. Five years ago, Microsoft moved its lab to the city-block-size building near the rail station. Many of Mr. Bishop’s former students and colleagues now work at other big tech companies. Neil Lawrence, a University of Sheffield professor who studied with Mr. Bishop at Cambridge, now works at the new Amazon Cambridge Development Center just down the street. Two prominent A.I. researchers who worked under Mr. Bishop at Microsoft have since moved to Google and DeepMind. Many of these researchers, like a number of other top A.I. researchers in Britain, were born outside the country. Still, local policymakers are concerned about local talent moving into foreign companies. “We have some of the top A.I. researchers in the world in the U.K.,” said Dame Wendy Hall, a computer science professor at the University of Southampton. “How do we stop the A.I. brain drain to the U.S. — or to the U.S. companies anyway?” Last year, the British government commissioned a report on the country’s A.I. landscape from Dame Hall and Jerome Pesenti, the chief executive of BenevolentAI, an artificial intelligence start-up based in London. Within weeks of the report’s release, Mr. Pesenti moved to Facebook. He is now vice president of artificial intelligence in the company’s New York office. “It does illustrate the point,” Dame Hall said. “Once your head is above the parapet in this world, you draw interest, particularly from the big Silicon Valley giants.” The report called for increased financing for universities, and in the months following the government responded, saying it would fund 200 new Ph.D.s in artificial intelligence and related fields by 2020 and invest a total of $500 million in math, digital and technical education across Britain. In Cambridge, there are bigger questions about the boundaries between academia and industry. Even those who have prospered financially from the dynamic aren’t sure where to draw the line. Zoubin Ghahramani, a Cambridge professor who sold a start-up to Uber and is now the company’s chief scientist while still maintaining his ties with the university, worries about a brain drain from Europe in artificial intelligence. He has called for the creation of a European research institute to recruit people in the region who may otherwise go work for a Silicon Valley firm. 
His colleague at Cambridge, Steve Young, a respected speech-recognition researcher who has sold companies to Microsoft, Google and Apple, noted that it was “almost impossible” for the university to compete for staff against tech companies, limiting who will teach the next generation of students. “That could have some very severe consequences,” he said.

His comment came with a laugh. Mr. Young splits time between the university and Apple, where an important part of his job is recruitment for the company. “I don’t recruit from Cambridge,” he joked.

Vishal Chatrath was the first employee and the chief business officer at VocalIQ, a Cambridge speech technology start-up that Apple acquired in late 2015 and transformed into a local Siri development center. Now, just two blocks from Apple’s Cambridge outpost, he oversees a new start-up called Prowler, which aims to automate business decisions that are typically made by humans.

For Mr. Chatrath, Prowler shows how acquisitions by foreign companies can spur the creation of new start-ups. A second VocalIQ employee left and recently founded a start-up called PolyAI, which is trying to build truly conversational computing systems. “A lot of capital is now flowing in Cambridge, and that capital helps push the next wave of entrepreneurs,” Mr. Chatrath said.

For others, the question is whether start-ups like this will evolve into vibrant companies — or just disappear into a company like Apple or Google.

The big American companies are also attracted by the salaries they can pay here. According to the recruitment website Hired, the average tech salary in London is $78,000 a year, versus $142,000 in Silicon Valley. “It remains one of the huge competitive advantages that you can get the same, or better, talent for cheaper and less churn,” said Matt Clifford, a co-founder of Entrepreneur First, a start-up incubator in London that recruits students from Cambridge and Oxford. Entrepreneur First helped create Magic Pony, yet another A.I. company, which Twitter acquired for $150 million in 2016.

But some wonder whether these companies could better serve Britain by staying independent. Ian Hogarth trained as a machine learning researcher at Cambridge, founded the live music app Songkick and is now an angel investor in Britain. He argued that if DeepMind had remained independent, it might have grown into the country’s first tech superpower. Following a similar path were start-ups like VocalIQ (acquired by Apple) and Evi, the company that Amazon acquired in 2013 as part of its effort to build the Alexa digital assistant. Evi was the foundation for Amazon’s Cambridge operation.

Many have applauded the enormous economic change these acquisitions are helping to drive in London and Cambridge. But not everyone is clapping. Last year, in Cambridge, a new housing development was vandalized with graffiti written in Latin: “Locus in Domos Loci Populum.” As the BBC reported, this translates to “local homes for local people.” As the tech workers land the big salaries, home prices are skyrocketing, and the locals are being squeezed out. It is yet another example of Silicon Fen’s looking a lot like Silicon Valley. < Here >
  22. Too much time discussing whether robots can take your job; not enough time discussing what happens next

Are we focusing too much on analyzing exactly how many jobs could be destroyed by the coming wave of automation, and not enough on how to actually fix the problem? That’s one conclusion in a new paper on the potential effects of robotics and AI on global labor markets from the US think tank the Center for Global Development (CGD).

The paper’s authors, Lukas Schlogl and Andy Sumner, say it’s impossible to know exactly how many jobs will be destroyed or disrupted by new technology. But, they add, it’s fairly certain there are going to be significant effects — especially in developing economies, where the labor market is skewed toward work that requires the sort of routine, manual labor that’s so susceptible to automation. Think unskilled jobs in factories or agriculture.

As earlier studies have also suggested, Schlogl and Sumner think the effects of automation on these and other nations are not likely to be mass unemployment, but the stagnation of wages and polarization of the labor market. In other words, there will still be work for most people, but it’ll be increasingly low-paid and unstable, without benefits such as paid vacation, health insurance, or pensions. On the other end of the employment spectrum, meanwhile, there will continue to be a small number of rich and super-rich individuals who reap the benefits of increased productivity created by technology.

These changes will likely mean a decline in job security and standards of living for many, which in turn could lead to political dissatisfaction. (Some suggest we’ve already seen the early impact of this, with US cities where jobs are at risk of automation more likely to vote Republican.)

Schlogl and Sumner give an overview of proposed solutions to these challenges, but seem skeptical that any go far enough. One class of solution they call “quasi-Luddite” — measures that try to stall or reverse the trend of automation. These include taxes on goods made with robots (or taxes on the robots themselves) and regulations that make it difficult to automate existing jobs. They suggest that these measures are challenging to implement in “an open economy,” because if automation makes for cheaper goods or services, then customers will naturally look for them elsewhere; i.e. outside the area covered by such regulations. A related strategy is to reduce the cost of human labor, by driving down wages or cutting benefits, for example. “The question is how desirable and politically feasible such strategies are,” say Schlogl and Sumner, which is a nice way of saying “it’s not clear how much you can hurt people before they riot in the streets.”

The other class of solution they call “coping strategies,” which tend to focus on one of two things: re-skilling workers whose jobs are threatened by automation or providing economic safety nets to those affected (for example, a universal basic income or UBI). Schlogl and Sumner suggest that the problem with retraining workers is that it’s not clear what new skills will be “automation-resistant for a sufficient time” or whether it’s even worth the money to retrain someone in the middle of their working life. (Retraining is also more expensive and challenging for developing countries where there’s less infrastructure for tertiary education.) 
As for economic safety nets like UBI, they suggest these might not even be possible in developing countries. That’s because they presuppose the existence of prosperous jobs somewhere in the economy from which profits can be skimmed and redistributed. They also note that such UBI-related schemes might raise the cost of labor, which in turn would encourage more jobs to be substituted with technology. All this leads the pair to conclude that there’s simply not enough work being done researching the political and economic solutions to what could be a growing global crisis. “Questions like profitability, labor regulations, unionization, and corporate-social expectations will be at least as important as technical constraints in determining which jobs get automated,” they write. And do Schlogl and Sumner propose any of their own solutions? They write: “In the long term, utopian as it may seem now, [there is a] moral case for a global UBI-style redistribution framework financed by profits from ... high-income countries.” Now that would certainly get the anti-globalist crowd incensed, and the pair admit that it’s “difficult to see how such a framework would be politically enacted.” Back to the drawing board then. Source
  23. Adobe Research has been getting busy nailing down how to spot image manipulations by unleashing AI on the case. In doing so, they may be making real headway in the field of image forensics. You can check out the paper, “Learning Rich Features for Image Manipulation Detection,” by authors whose affiliations include Adobe Research and University of Maryland, College Park. The paper should be seen by fakers who think they can get away with flaunting their tricks, because Adobe’s scientists are eager to get and stay on your case.

Senior research scientist Vlad Morariu, for example, set out on a quest to solve the problem of how to detect images that have been subject to tampering. Morariu is no stranger to the task. In 2016, he took up a challenge of detecting image manipulation as part of the DARPA Media Forensics program.

How can you detect if a picture is authentic or has been manipulated? In this case, he and his colleagues watched out for manipulation via three types of operations: splicing [parts of two different images are combined], cloning [when you move an object from one place to another] and removal [in the latter, you remove an object—and the space can be filled in].

First, let’s hear some noise. “Every image has its own imperceptible noise statistics. When you manipulate an image, you actually move the noise statistics along with the content.”

A posting in the Adobe Blog also carried his remarks about what we can know about manipulation. “File formats contain metadata that can be used to store information about how the image was captured and manipulated. Forensic tools can be used to detect manipulation by examining the noise distribution, strong edges, lighting and other pixel values of a photo. Watermarks can be used to establish original creation of an image.”

Even though the human eye may be unable to spot the artifacts, detection is possible by close analysis at the pixel level, said Adobe, or by applying filters that help highlight changes. Not all such tools, however, work perfectly to discover tampering.

Enter artificial intelligence and machine learning—and they entered Vlad’s head, as potentially reliable paths to identify a modified image. Can AI not only spot manipulation but also identify the type of manipulation used and highlight the specific area of the photograph that was altered? To get answers, he and his team trained a deep learning neural network to recognize image manipulation. Two techniques were tried: (1) an RGB stream (changes to red, green and blue color values of pixels) to detect tampering and (2) use of a noise stream filter.

Results? The authors said in their paper that “Experiments on standard datasets show that our method not only detects tampering artifacts but also distinguishes between various tampering techniques. More features, including JPEG compression, will be explored in the future.”

The Adobe Blog reminds us that digital image manipulation is a technology that “can be used for both the best and the worst of our imaginations.” Why this research matters: Techniques used provide more possibility and more options for managing the impact of digital manipulation, and they potentially answer questions of authenticity more effectively, said the Adobe Blog.

Paul Lilly weighed in at HotHardware: “It’s not a perfect system, but it’s nice to see companies like Adobe working on ways to separate fact from fiction in photography.” < Here >
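The two-stream network itself isn’t shown in the article, but the “noise statistics” intuition is easy to demo. The sketch below is a crude stand-in for the noise stream, not Adobe’s method: it subtracts a median-filtered copy of the image to leave a noise residual, then flags blocks whose residual energy deviates from the rest of the photo. The file name, block size and threshold are arbitrary choices.

```python
# Toy noise-residual check (a stand-in for the "noise stream" idea, not Adobe's method).
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

img = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float32)  # assumed input file

# Noise residual: what is left after removing the low-frequency image content.
residual = img - median_filter(img, size=3)

# Measure residual energy block by block; spliced regions often carry
# noise statistics that differ from the rest of the photo.
block = 32
h, w = residual.shape
energy = np.array([
    [residual[i:i + block, j:j + block].std()
     for j in range(0, w - block + 1, block)]
    for i in range(0, h - block + 1, block)
])

# Flag blocks whose noise level deviates strongly from the image-wide median.
flags = np.abs(energy - np.median(energy)) > 2 * energy.std()
print(f"{int(flags.sum())} of {flags.size} blocks look inconsistent")
```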
  24. A machine taught itself to solve a Rubik’s Cube without human assistance, according to a group of UC Irvine researchers. Two algorithms, collectively called Deep Cube, typically can solve the 3-D combination puzzle within 30 moves, matching or beating systems that rely on human knowledge, according to the research paper. Less than 5.8% of the world’s population can solve the Rubik’s Cube, according to the Rubik’s website.

“At first I didn’t think it was possible to solve the Rubik’s Cube without any human data or knowledge,” said Stephen McAleer, a UCI PhD student.

The trick, he said, is to present an advanced computer with a solved Rubik’s Cube and let it descramble the puzzle bit by bit. The researchers call this algorithm “autodidactic iteration,” in which the machine works backward to teach itself the moves that solve the puzzle. In the second algorithm, the trained neural networks use the moves learned in the first algorithm to solve the cube. The machine plays with the puzzle and learns how to solve it from any starting point, McAleer said.

McAleer and his team, which includes one professor and two other students, submitted their research in May for consideration for publication at the Conference on Neural Information Processing Systems later this year.

If the group had used a reinforcement learning approach, in which the machine was rewarded for every step it took that brought it closer to solving the puzzle, it would be “impossible for neural networks to know when it’s in a good or bad state,” McAleer said. The group was inspired by a research paper that used neural networks and an advanced search method called the “Monte Carlo tree search” to teach artificial intelligence to play the strategy board game Go.

One reason the group thought its research would be “very interesting” to pursue is that it would push artificial intelligence to go beyond pattern recognition and to reason about problems, McAleer said. “In order to solve the Rubik’s Cube, this artificial intelligence has to reason symbolically,” he said. “It has to think about how it’s going to manipulate this mathematical structure.”

McAleer said the next step is to see how the research could be applied in biology, such as in protein folding, the process by which a protein structure assumes its functional shape or conformation. < Here >
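As a rough picture of the “work backward from the solved cube” idea, the snippet below generates training positions by scrambling a solved state a random number of moves and keeps the scramble depth as a crude learning signal. This is a simplification of autodidactic iteration, which bootstraps its value targets from the network itself; the abstract “moves” here are arbitrary invertible permutations standing in for real cube turns.

```python
# Toy sketch of generating training data outward from the solved state
# (a simplification of autodidactic iteration, not the researchers' code).
import random

random.seed(0)
N = 12                        # stand-in for the cube's sticker state
SOLVED = tuple(range(N))

def random_move():
    p = list(range(N))
    random.shuffle(p)
    return tuple(p)           # an arbitrary invertible permutation, standing in for a cube turn

MOVES = [random_move() for _ in range(6)]

def apply(state, move):
    return tuple(state[i] for i in move)

def make_training_samples(num_samples, max_depth):
    """Scramble from the solved state; the scramble depth is a crude stand-in
    for the value target the real method bootstraps from its own network."""
    samples = []
    for _ in range(num_samples):
        state, depth = SOLVED, random.randint(1, max_depth)
        for _ in range(depth):
            state = apply(state, random.choice(MOVES))
        samples.append((state, depth))   # (position, rough distance-to-solved)
    return samples

for state, depth in make_training_samples(num_samples=5, max_depth=10):
    print(depth, state)
```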
  25. AI advances by the ‘Medical Brain’ team could help the internet giant finally break into the health-care business

A woman with late-stage breast cancer came to a city hospital, fluids already flooding her lungs. She saw two doctors and got a radiology scan. The hospital’s computers read her vital signs and estimated a 9.3 percent chance she would die during her stay.

Then came Google’s turn. A new type of algorithm created by the company read up on the woman -- 175,639 data points -- and rendered its assessment of her death risk: 19.9 percent. She passed away in a matter of days.

The harrowing account of the unidentified woman’s death was published by Google in May in research highlighting the health-care potential of neural networks, a form of artificial intelligence software that’s particularly good at using data to automatically learn and improve. Google had created a tool that could forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of re-admission and chances they will soon die.

What impressed medical experts most was Google’s ability to sift through data previously out of reach: notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this unruly information, then spat out predictions. And it did it far faster and more accurately than existing techniques. Google’s system even showed which records led it to conclusions.

Hospitals, doctors and other health-care providers have been trying for years to better use stockpiles of electronic health records and other patient data. More information shared and highlighted at the right time could save lives -- and at the very least help medical workers spend less time on paperwork and more time on patient care. But current methods of mining health data are costly, cumbersome and time consuming.

As much as 80 percent of the time spent on today’s predictive models goes to the “scut work” of making the data presentable, said Nigam Shah, an associate professor at Stanford University, who co-authored Google’s research paper, published in the journal Nature. Google’s approach avoids this. “You can throw in the kitchen sink and not have to worry about it,” Shah said.

Google’s next step is moving this predictive system into clinics, AI chief Jeff Dean told Bloomberg News in May. Dean’s health research unit -- sometimes referred to as Medical Brain -- is working on a slew of AI tools that can predict symptoms and disease with a level of accuracy that is being met with hope as well as alarm.

Inside the company, there’s a lot of excitement about the initiative. “They’ve finally found a new application for AI that has commercial promise,” one Googler says. Since Alphabet Inc.’s Google declared itself an “AI-first” company in 2016, much of its work in this area has gone to improve existing internet services. The advances coming from the Medical Brain team give Google the chance to break into a brand new market -- something co-founders Larry Page and Sergey Brin have tried over and over again.

Software in health care is largely coded by hand these days. In contrast, Google’s approach, where machines learn to parse data on their own, “can just leapfrog everything else,” said Vik Bajaj, a former executive at Verily, an Alphabet health-care arm, and managing director of investment firm Foresite Capital. “They understand what problems are worth solving,” he said. 
"They’ve now done enough small experiments to know exactly what the fruitful directions are.” Dean envisions the AI system steering doctors toward certain medications and diagnoses. Another Google researcher said existing models miss obvious medical events, including whether a patient had prior surgery. The person described existing hand-coded models as “an obvious, gigantic roadblock” in health care. The person asked not to be identified discussing work in progress. For all the optimism over Google’s potential, harnessing AI to improve health-care outcomes remains a huge challenge. Other companies, notably IBM’s Watson unit, have tried to apply AI to medicine but have had limited success saving money and integrating the technology into reimbursement systems. Google has long sought access to digital medical records, also with mixed results. For its recent research, the internet giant cut deals with the University of California, San Francisco, and the University of Chicago for 46 billion pieces of anonymous patient data. Google’s AI system created predictive models for each hospital, not one that parses data across the two, a harder problem. A solution for all hospitals would be even more challenging. Google is working to secure new partners for access to more records. A deeper dive into health would only add to the vast amounts of information Google already has on us. "Companies like Google and other tech giants are going to have a unique, almost monopolistic, ability to capitalize on all the data we generate," said Andrew Burt, chief privacy officer for data company Immuta. He and pediatric oncologist Samuel Volchenboum wrote a recent column arguing governments should prevent this data from becoming "the province of only a few companies," like in online advertising where Google reigns. Google is treading carefully when it comes to patient information, particularly as public scrutiny over data-collection rises. Last year, British regulators slapped DeepMind, another Alphabet AI lab, for testing an app that analyzed public medical records without telling patients that their information would be used like this. With the latest study, Google and its hospital partners insist their data is anonymous, secure and used with patient permission. Volchenboum said the company may have a more difficult time maintaining that data rigor if it expands to smaller hospitals and health-care networks. Still, Volchenboum believes these algorithms could save lives and money. He hopes health records will be mixed with a sea of other stats. Eventually, AI models could include information on local weather and traffic -- other factors that influence patient outcomes. "It’s almost like the hospital is an organism," he said. Few companies are better poised to analyze this organism than Google. The company and its Alphabet cousin, Verily, are developing devices to track far more biological signals. Even if consumers don’t take up wearable health trackers en masse, Google has plenty of other data wells to tap. It knows the weather and traffic. Google’s Android phones track things like how people walk, valuable information for measuring mental decline and some other ailments. All that could be thrown into the medical algorithmic soup. Medical records are just part of Google’s AI health-care plans. Its Medical Brain has unfurled AI systems for radiology, ophthalmology and cardiology. They’re flirting with dermatology, too. 
Staff created an app for spotting malignant skin lesions; a product manager walks around the office with 15 fake tattoos on her arms to test it. Dean, the AI boss, stresses this experimentation relies on serious medical counsel, not just curious software coders. Google is starting a new trial in India that uses its AI software to screen images of eyes for early signs of a condition called diabetic retinopathy. Before releasing it, Google had three retinal specialists furiously debate the early research results, Dean said. Over time, Google could license these systems to clinics, or sell them through the company’s cloud-computing division as a sort of diagnostics-as-a-service. Microsoft Corp., a top cloud rival, is also working on predictive AI services. To commercialize an offering, Google would first need to get its hands on more records, which tend to vary widely across health providers. Google could buy them, but that may not sit as well with regulators or consumers. The deals with UCSF and the University of Chicago aren’t commercial. For now, the company says it’s too early to settle on a business model. At Google’s annual developer conference in May, Lily Peng, a member of Medical Brain, walked through the team’s research outmatching humans in spotting heart disease risk. "Again," she said. "I want to emphasize that this is really early on." < Here >