Showing results for tags 'artificial intelligence'.



Found 50 results

  1. Artificial intelligence and the future of smartphone photography

3-D sensors coming to this year's smartphones are just the tip of a wave of machine learning-driven photography that will both correct shortcomings of smartphone pictures and provide some stunning new aspects of photography.

Photography has been transformed in the age of the smartphone. Not only is the pose different, as in the case of the selfie, but the entire nature of the process of light being captured by phone cameras is something else altogether. Cameras are no longer just a lens and a sensor; they are also the collection of algorithms that instantly manipulate images to achieve photographic results that would otherwise require hours of manipulation via desktop software. Photography has become computational photography.

Continued advances in machine learning forms of artificial intelligence will bring still more capabilities that will make today's smartphone pictures look passé. Recent examples of the state of the art on phones are Google's Pixel 3 smartphone pictures and Apple's iPhone X photos. In the former case, Google has used machine learning to capture more detail under low-light conditions, so that night scenes look like daylight. These are simply not shots that ever existed in nature. They are super-resolution pictures.

And Apple, starting with the iPhone X in 2017, added "bokeh," the artful blurring of elements outside of the focal point. This was not achieved via aspects of the lens itself, as is the case in traditional photography, but rather by a computational adjustment of the pixels after the image is captured.

It's quite possible 2019 and 2020's breakthrough development will be manipulating the perspective of an image to improve it. Hopefully, that will lead to a correction of the distortions inherent in smartphone photography that make such pictures come up short next to digital single-lens-reflex (DSLR) camera pictures.

[Image: How a convolutional neural network, or CNN, attempts to reconstruct reality from a picture. From "Understanding the Limitations of CNN-based Absolute Camera Pose Regression," by Torsten Sattler of Chalmers University of Technology, Qunjie Zhou and Laura Leal-Taixe of TU Munich, and Marc Pollefeys of ETH Zürich and Microsoft.]

Phone makers could, in fact, achieve results akin to what are known as "tilt-shift" cameras. In a tilt-shift camera, the lens is angled to make up for the angle at which a person is standing with the camera, and thereby correct the distortions that would otherwise be created in the image because of the angle between the individual and the scene. Tilt-shift capabilities can be had by DSLR owners via a variety of removable lenses from various vendors.

The average phone camera has a lens barrel so tiny that everything it captures is distorted. Nothing is ever quite the right shape as it is in the real world. Most people may not notice or care, as they've become used to selfies on Instagram. But it would be nice if these aberrations could be ameliorated. And if they can, it would be a selling point for the next round of smartphones from Google, Apple, etc.

Increasingly, the iPhone and other smartphones will carry rear cameras with 3-D sensors. These sensors, made by the likes of Lumentum Holdings and other chip vendors, measure the depth of the surroundings of the phone by sending out beams of light and timing how they return to the phone after bouncing off objects.
Techniques such as "time-of-flight" allow the phone to measure in detail the three-dimensional structure of the surrounding environment. Those sensors can take advantage of a vast body of statistical work that has been done in recent years to understand the relationship between 2-D images and the real world.

[Image: Google's "Night Sight" feature on its Pixel 3 smartphones: scenes that never existed in nature. Google.]

A whole lot of work has been done with statistics to achieve the kinds of physics that go into tilt-shift lenses, both with and without special camera gear. For example, a technique called "RANSAC," or "random sample consensus," goes back to 1981 and is specifically designed to find landmarks in the 3-D world that can be mapped to points in a 2-D image plane, to know how the 2-D image correlates to three-dimensional reality. Using that technique, it's possible to gain a greater understanding of how a two-dimensional representation corresponds to the real world.

A team of researchers at the University of Florence in 2015 built on RANSAC to infer the setup of a pan-tilt-zoom camera by reasoning backward from pictures it took. They were able to tune the actuators, the motors that control the camera, to a fine degree by using software to analyze how much distortion is introduced into pictures with different placements of the camera. And they were able to do it for video, not just still images.

Since that time, there's been a steady stream of work to estimate objects in pictures, referred to as pose estimation, and a related task, simultaneous localization and mapping, or SLAM, which constructs in software a "cloud" of points in a 3-D scene that can be used to understand how much distortion is in a digital photo. Researchers at the University of Erlangen-Nürnberg in Germany and the Woods Hole Oceanographic Institution in 2017 showed off a Python library, called CameraTransform, which lets one reckon the real dimensions of an object in the world by working backward from the image taken.

[Image: Seeing around corners: a neural network created by researchers to infer objects occluded in a picture, consisting of an encoder-decoder combined with a generative adversarial network. Courtesy of Helisa Dhamo, Keisuke Tateno, Iro Laina, Nassir Navab, and Federico Tombari of the Technical University of Munich, with support from Canon, Inc.]

Last year, researchers at the Technical University of Munich, Germany, and Canon, Inc. showed it's possible to take a single image and infer what's in the scene that's occluded by another object. Called a "layered depth image," it can create new scenes by removing an object from a photo, revealing the background that the camera never saw, but that was computed from the image. The method uses the familiar encoder-decoder approach found in many neural network applications to estimate the depth of a scene, and a "generative adversarial network," or GAN, to construct the parts of the scene that were never actually in view when the picture was taken.

All that research is bubbling up and is going to culminate in some fantastic abilities for the next crop of smartphone cameras, equipped with 3-D sensors. The results of this line of research should be stunning. At the very least, one can imagine portraits taken on smartphones that no longer have strange distortions of people's faces. Super-resolution pictures of architecture will be possible that keep parallel lines parallel by evening out all the distortions of the lens.
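As a concrete illustration of the RANSAC idea mentioned above — repeatedly fit a model to a tiny random sample, then keep the candidate that the most points agree with — here is a minimal, self-contained Python sketch that fits a 2-D line to points contaminated with outliers. The function, thresholds and toy data are invented for illustration and do not come from any of the cited papers or libraries.

```python
import numpy as np

def ransac_line(points, iters=200, inlier_tol=0.05, rng=None):
    """Fit a 2-D line y = a*x + b robustly with RANSAC.

    Repeatedly fits a candidate line to a random minimal sample (two points),
    counts how many points fall within `inlier_tol` of it, and keeps the
    candidate with the largest consensus set -- the same idea the 1981
    RANSAC paper applies to matching 3-D landmarks to 2-D image points.
    """
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue  # degenerate (vertical) sample; skip for simplicity
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < inlier_tol).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Toy data: a noisy line plus gross outliers, standing in for a set of
# correspondences where some matches are simply wrong.
xs = np.linspace(0, 1, 100)
pts = np.column_stack([xs, 2.0 * xs + 0.5 + 0.01 * np.random.randn(100)])
pts[::10, 1] += np.random.uniform(-3, 3, size=10)  # corrupt every 10th point
print(ransac_line(pts))
```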
The smartphone industry will be able to claim another victory over the DSLR market as phones churn out pictures with stunning levels of accuracy and realism. But, of course, the long-term trend for smartphone photography is away from realism, toward more striking effects that were not possible before computational photography. And so we may see uses of 3-D sensing that tend toward the surreal. For example, tilt-shift cameras can be used to create some strangely beautiful effects, such as narrowing the depth of field of the shot to an extreme degree. That has the effect of making landscapes look as if they're toy models, in an oddly satisfying way. There are apps for phones that will do something similar, but the effect of having 3-D sensors coupled to AI techniques will go well beyond what those apps achieve. There are techniques for achieving tilt-shift in Photoshop, but it will be much more satisfying to have the same effects come right out of the camera with each press of the shutter button. Down the road, there'll be another stage that will mean a lot in terms of advancing machine learning techniques. It's possible to forego the use of 3-D sensors and just use a convolutional neural network, or CNN, to infer the coordinates in space of objects. That would save on the expense of building the sensors into phones. However, currently, such software-only approaches produce poor results, as discussed in a report out this week by researchers at Microsoft and academic collaborators. Known as "absolute pose regression," the software-only approach failed to generalize, they write, after training, meaning that whatever techniques the CNN acquired didn't correctly estimate geometry when tested with novel images. The authors consider their work "an important sanity check" for software-only efforts, and they conclude that "there is still a significant amount of research to be done before pose regression approaches become practically relevant." How will that work get done? Not by researchers alone. It will be done by lots of smartphone owners. With the newest models, containing 3-D sensors, they will snap away their impressive 3-D sensing-enhanced pictures. While they do so, their device, or the cloud, will be keeping track of how real-world geometry correlates to 2-D images. It will be using all that activity, in other words, to keep learning. Some day, with enough 3-D shots, the CNN, or whatever algorithm is used, will be smart enough to look at the world and know exactly what it's like even without help from 3-D depth perception. Source
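The "absolute pose regression" idea criticized in the Microsoft-affiliated study above boils down to a single CNN that maps an image directly to a camera position and orientation. The PyTorch sketch below shows that structure in its simplest form; the ResNet-18 backbone, layer sizes and quaternion output are illustrative choices, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class PoseRegressor(nn.Module):
    """Map an RGB image to a 3-D translation and a unit-quaternion rotation."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN feature extractor (torchvision >= 0.13)
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.backbone = backbone
        self.fc_xyz = nn.Linear(512, 3)           # camera position
        self.fc_quat = nn.Linear(512, 4)          # camera orientation

    def forward(self, images):
        feats = self.backbone(images)
        xyz = self.fc_xyz(feats)
        quat = self.fc_quat(feats)
        quat = quat / quat.norm(dim=1, keepdim=True)  # normalise to a valid rotation
        return xyz, quat

model = PoseRegressor()
xyz, quat = model(torch.randn(2, 3, 224, 224))  # two dummy 224x224 images
print(xyz.shape, quat.shape)                    # (2, 3) and (2, 4)
```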
  2. The future of graphics is unquestionably AI image generation: Jensen Huang

Nvidia CEO tells ZDNet that a new take on Quake II points the way to the future.

For old-school gamers who fondly remember rocket jumps, there was a point during the GTC keynote address by Nvidia CEO Jensen Huang that would warm the cockles of the heart: an updated version of Quake II to make use of the ray tracing capabilities of the company's RTX GPUs. The key to this demonstration was denoising, Huang told ZDNet on Tuesday.

"Denoising is basically using AI to infer the missing pixels," he said. "If I disable the denoising for you yesterday, what it would look like is a black, dark spotty, kind of fuzzy image, but when we turn on denoising, it looks like what you saw yesterday. Incredible."

On Monday, members of the research arm of Nvidia showed off how they can use a neural network to create a high-quality image from a rough sketch, and besides a few boundary issues with different elements, it looked rather realistic. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said at the time. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows, and colours based on what it has learned about real images."

The challenge with this technology, according to Huang, will be how to speed it up to real time. "It's very hard -- and the reason for that is because you don't have very much time," he told ZDNet. "You only have about roughly two or three milliseconds to do it -- not one or two seconds." Responsibility for this increase falls to the Tensor cores used in Nvidia's Turing architecture. "The Tensor core processor is so fast, and our goal is to now use that Tensor core, to teach it, how to guess, otherwise infer, the missing pixels," Huang said. "We are doing a lot of work in AI-inferred image generation. It is unquestionably the future."

[Image: Quake II in its original colour and lighting scheme. (Image: Nvidia)]

Earlier on Tuesday, Google announced its Stadia cloud gaming service, a space Nvidia has been playing in for some time with its GeForce-as-a-service offering, GeForce Now. The product is currently in a closed beta with 300,000 users, 15 datacentres, and 1 million further people on a waiting list. Although he would not be drawn on direct comparisons between the services, Huang told journalists that there are basic laws of physics to deal with.

"The latency, the speed of light matters in the round-trip time of video games," he said. "The round trip time of cloud gaming -- we're the best, GeForce Now is the best in the world -- round trip time is 70ms. For esports, you would have been shot 10 times, so it's not perfect for esports. Now there are many games, and for some people their framerate is only 30fps ... so for some games, it's fine."

The Nvidia CEO said GeForce Now is actively avoiding trying to become a Netflix of gaming, due to large gaming franchises and publishers straddling the industry. "We believe that the video game industry is largely occupied, the best platforms are largely occupied by, call it, five or 10 games -- and these five to 10 games are powerful franchises, they're not going to put it on your store," he said. "And so our strategy is to leave the economics completely to the publishers, to not get in the way of their relationship with the gamer.
Because Blizzard, World of Warcraft, has their gamers, and PUBG has their gamers, League of Legends has their gamers, and the publishers are very, very strong, and they would like to have a great relationship with their gamers, and we would like not to get in the way of that."

Rather, GeForce Now is a way for Nvidia to move beyond its current 200 million user base and chase the "billion PC customers, who don't have GeForces" because the economics does not stack up, or their current hardware is limited in some way from high-powered gaming. With latency being key, Nvidia is looking to get its servers across the globe -- in particular, to partner with telcos under the GeForce Now Alliance banner, like SoftBank and LG Uplus, which on Monday announced that they would deploy ray-tracing-capable RTX Servers in Japan and Korea. Huang added that GeForce Now is expected to be upgraded to RTX hardware in either the third or fourth quarter of the year.

Source
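Huang describes denoising as "using AI to infer the missing pixels." As a rough illustration of that idea (and nothing like Nvidia's production, Tensor-core-accelerated denoiser), the toy PyTorch network below is trained to reconstruct full frames from frames in which most pixels have been dropped:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy image-space denoiser: sparse/noisy RGB frame in, reconstructed frame out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

denoiser = TinyDenoiser()
clean = torch.rand(8, 3, 64, 64)                          # stand-in "ground truth" frames
noisy = clean * (torch.rand_like(clean) > 0.7).float()    # keep ~30% of pixels, zero the rest
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(200):                                   # brief training loop on the toy data
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()
print(f"final reconstruction loss: {loss.item():.4f}")
```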
  3. Technology: the latest trends affecting travel Travel Technology Europe returned to Olympia London for two days of innovation, networking and news last month. Andrew Doherty reports on the latest tech trends captivating travel. Artificial intelligence, New Distribution Capability (NDC) and digital transformation were among the hot topics discussed at Travel Technology Europe (TTE), which celebrated its 16th anniversary in February. More than 100 technology brands and start-ups attended the exhibition to showcase and sell their wares to prospective buyers, with 120 industry professionals invited to speak at a series of talks and seminars that ran in tandem with the main show. However, it was at the C-suite Question Time panel that leading marketers, chief information and technology officers convened to talk about how technology was influencing their businesses and travel more generally. Panellists comprised Emil Majkowski, chief information officer and architect at Rentals United; Simon Hamblin, chief technology officer at dnata Travel; Clare de Bono, head of product and innovation at Amadeus; Phil Scully, chief information officer at Costa Coffee; and Suzie Thompson, vice-president of marketing, distribution and revenue management at Red Carnation Hotels. Moderating the discussion was Charlotte Lamp Davies, principal consultant at A Bright Approach, who kicked off proceedings by asking speakers how they were embracing technology in 2019. De Bono said Amadeus would continue to work on NDC – an XML-based communication standard between airlines and travel agents, which offers access to a broader airline inventory from participating carriers, including ancillary products. “For our agent partners, this means bringing NDC, low-cost carriers and new application programming interfaces (APIs) to a centralised platform.” From a customer-facing position, Thompson explained Red Carnation Hotels would be future-proofing its marketing strategies. “We want to have a data cleanup and use artificial intelligence to obtain healthier insights to use in our marketing efforts,” she said. Meanwhile, at dnata Travel, Hamblin said scalability would take precedence in its tech strategy. He said: “Apart from bringing all of our legacy systems together, we want to work on our Yalago bedbank brand. The engine is currently handling more than 10 million requests an hour. We want it to cope with 100 million.” Driving innovation The session also focused on practical steps travel companies can take to attract tech talent. De Bono said businesses must empower IT teams by offering access to the latest technologies on offer. “We [Amadeus] have moved off mainframes; 600 million lines of code were rewritten in 2017. “We’re also leveraging Amazon Web Services and Google Cloud Platform, which is really helping us recruit. Not only do teams want to work on things that make a difference, but they also want to work with new tech.” Majkowski said companies that have a dedicated tech strategy would help retain talent. “When teams feel like their ideas can be implemented, they will stay. They don’t want to jump between projects.” In order to successfully drive digital transformation, Hamblin said travel companies must encourage innovation from the top down. “It’s so important that everyone understands what’s happening, why the digital transformation is taking place and how it will impact them. 
Leaving legacy systems behind is difficult – the only way to get through it is by breaking them down one by one and demonstrating how the new programmes can deliver.” Red Carnation’s Thompson explained companies must first assess the value of digital transformation before assigning a budget to facilitate it. “If digital transformation – investing in artificial intelligence or virtual reality, for example – doesn’t benefit the customer, then we as a business won’t do it,” she said. “It must enhance their experience.” De Bono said Amadeus was developing AI technology to help travel agents suggest relevant holiday options. Alita (Amadeus Linguistics Intelligent Travel Assistant) observes customer behaviour to predict, recommend and personalise trips using natural language processing and machine learning. Because Alita is still a prototype, there is currently no time-frame as to when it will be rolled out. Looking to the future, Majkowski predicted automated payment systems would become more common in travel businesses. “I think we will see more revenue management programmes using machine learning to facilitate data distribution, while Blockchain will help with managing identity.” Scully added: “One-tap payments will go a long way to facilitating a frictionless customer experience too. Unless that happens, we will continue to queue and still get frustrated.” Source
  4. SAS to invest $1 billion in AI for industry uses, education, R&D

The AI investment is part of SAS' efforts to make data, AI, machine learning and algorithms more return-driven and consumable.

SAS said it will invest $1 billion in artificial intelligence over the next three years as it develops its analytics platform, educates data scientists and targets industry-specific use cases. The investment is part of SAS' effort to build a higher profile. SAS is an analytics and data science pioneer, but the privately held company has been quietly retooling its business and products.

SAS' artificial intelligence investment will focus on AI research and development, education initiatives such as its certifications and data science academy, and services to create better returns on projects. Oliver Schabenberger, chief technology and operating officer at SAS, will give a keynote at the Nvidia GPU Technology Conference on Tuesday. SAS' AI efforts will revolve around embedding AI into its platform and creating tools for data management, customer analytics, fraud and security, and risk management. The company will also aim to meld AI and Internet of Things data for industries ranging from financial services to manufacturing to healthcare.

We caught up with Schabenberger to talk about SAS's strategy in December. Here were the key points on the company's strategy and where AI fits in:

SAS in recent years "hasn't been as visible as it could have been," said Schabenberger. But the company has been making pivots to software as a service, connecting its platform to other analytics tools and targeting industries better. SAS has been focused on "how our offering can bring analytics to areas undiscovered," he added.

SAS has also been focused on targeting a wide range of companies beyond large enterprises and making its offering more consumable. The company is entering a results-as-a-service model where customers come with business problems and SAS can help solve them with its expertise in analytics, machine learning and data science.

Most of the company's customers are on-premises, but migrating to cloud workloads at different paces, said Schabenberger. As that migration continues, SAS is focused on bridging the gap between those consuming the data and those preparing it and programming. "Data needs to be consumed," said Schabenberger. "Our offering today is more than what SAS became known for."

SAS is cloud agnostic and will plug into Google Cloud Platform, AWS and Microsoft Azure. The value from SAS comes from expertise in developing machine learning models and algorithms. "Our strength is embedding algorithms and bringing them into action to process data into a user model and solution," said Schabenberger.

Source
  5. Artificial intelligence progress gets gummed up in silos and cultural issues

Survey finds things slow-going for AI and robotic process automation.

Silos have always been considered a bad thing for enterprise IT environments, and today's push for artificial intelligence and other cognitive technologies is no exception. A recent survey shows fewer than 50% of enterprises have deployed any of the "intelligent automation technologies" -- such as artificial intelligence (AI) and robotic process automation (RPA). IT leaders participating in the survey say data and applications within their companies are too siloed to make it work.

[Photo: Joe McKendrick]

That's the gist of a survey of 500 IT executives, conducted by IDG in partnership with Appian. The majority of executives, 86%, say they seek to achieve high levels of integration between human work, AI, and RPA over the coming year. The problem is they have a long way to go -- at this time, only 12% said their companies do this really well today.

Where are the problems? Two-thirds of executives, 66%, stated that they "have difficulty integrating existing IT investments and skills with demanding AI and RPA technology." Notably, 43% cite changing the IT culture as an obstacle to AI and RPA. While the survey report's authors did not spell out what kind of changes were required, it can be assumed that IT culture is hampered by a need for constant maintenance and firefighting, versus focusing more on innovation. There may also be issues with communication between the IT and business sides of the house -- as well as interacting more with data science types. Some of these issues may eventually see some relief through agile and DevOps initiatives.

Additional issues that hold back AI and RPA progress include concerns about security, cited by 41%, and application development issues, seen by 34% of the group. Again, this was not elaborated in the report, but application development roadblocks likely stem from a lack of proper tools to build AI-driven applications, along with the need for skills development or refreshes.

In addition, linking automation efforts to improving customer experiences was problematic. Two-thirds of executives, 66%, say they needed a better multi-channel buying experience. However, 26% lack the systems to deliver integrated multi-channel customer experiences, and 22% need to build or buy software to implement multi-channel customer experiences. Another 21% say they even lack a strategy for delivering integrated multi-channel customer experiences.

At this point, it appears AI and RPA are mainly the tools of the largest corporations with humongous IT staffs. While there are deployments of individual emerging automation technologies, a lack of strategy and clear alignment to business goals is resulting in siloed deployments and overwhelmed internal application development teams. Less than half of surveyed companies have deployed any form of intelligent automation. Fully half of those companies boast IT staffs in excess of 20,000 employees.

Source
  6. ThisPersonDoesNotExist.com uses AI to generate endless fake faces

Hit refresh to lock eyes with another imaginary stranger.

[Image: A few sample faces — all completely fake — created by ThisPersonDoesNotExist.com]

The ability of AI to generate fake visuals is not yet mainstream knowledge, but a new website — ThisPersonDoesNotExist.com — offers a quick and persuasive education. The site is the creation of Philip Wang, a software engineer at Uber, and uses research released last year by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a generative adversarial network (or GAN) to fabricate new examples.

"Each time you refresh the site, the network will generate a new facial image from scratch," wrote Wang in a Facebook post. He added in a statement to Motherboard: "Most people do not understand how good AIs will be at synthesizing images in the future."

The underlying AI framework powering the site was originally invented by a researcher named Ian Goodfellow. Nvidia's take on the algorithm, named StyleGAN, was made open source recently and has proven to be incredibly flexible. Although this version of the model is trained to generate human faces, it can, in theory, mimic any source. Researchers are already experimenting with other targets, including anime characters, fonts, and graffiti.

As we've discussed before at The Verge, the power of algorithms like StyleGAN raises a lot of questions. On the one hand, there are obvious creative applications for this technology. Programs like this could create endless virtual worlds, as well as help designers and illustrators. They're already leading to new types of artwork. Then there are the downsides. As we've seen in discussions about deepfakes (which use GANs to paste people's faces onto target videos, often in order to create non-consensual pornography), the ability to manipulate and generate realistic imagery at scale is going to have a huge effect on how modern societies think about evidence and trust. Such software could also be extremely useful for creating political propaganda and influence campaigns.

In other words, ThisPersonDoesNotExist.com is just the polite introduction to this new technology. The rude awakening comes later.

Source
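StyleGAN itself is a large research codebase, but the generative adversarial setup behind a site like this is compact: a generator turns random noise into images while a discriminator learns to tell generated images from real ones, and the two are trained against each other. Below is a stripped-down PyTorch sketch of that loop, with toy layer sizes and placeholder data rather than anything from Nvidia's implementation:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28      # toy sizes; StyleGAN works at up to 1024x1024

generator = nn.Sequential(             # noise vector -> flattened image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(         # flattened image -> "is this real?" probability
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()
real_images = torch.rand(64, img_dim)  # stand-in for a dataset of real photos

for step in range(100):
    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(64, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), torch.ones(64, 1)) + \
             bce(discriminator(fake_images), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_images = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake_images), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```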
  7. Google and Microsoft warn investors that bad AI could harm their brand

As AI becomes more common, companies' exposure to algorithmic blowback increases.

[Illustration by Alex Castro / The Verge]

For companies like Google and Microsoft, artificial intelligence is a huge part of their future, offering ways to enhance existing products and create whole new revenue streams. But, as revealed by recent financial filings, both firms also acknowledge that AI — particularly biased AI that makes bad decisions — could potentially harm their brands and businesses.

These disclosures, spotted by Wired, were made in the companies' 10-K forms. These are standardized documents that firms are legally required to file every year, giving investors a broad overview of their business and recent finances. In the segment titled "risk factors," both Microsoft and Alphabet, Google's parent company, brought up AI for the first time.

From Alphabet's 10-K, filed last week:

These disclosures are not, on the whole, hugely surprising. The idea of the "risk factors" segment is to keep investors informed, but also to mitigate future lawsuits that might accuse management of hiding potential problems. Because of this, they tend to be extremely broad in their remit, covering even the most obvious ways a business could go wrong. This might include problems like "someone made a better product than us and now we don't have any customers," and "we spent all our money so now we don't have any."

But, as Wired's Tom Simonite points out, it is a little odd that these companies are only noting AI as a potential factor now. After all, both have been developing AI products for years, from Google's self-driving car initiative, which began in 2009, to Microsoft's long dalliance with conversational platforms like Cortana. This technology provides ample opportunities for brand damage, and, in some cases, already has. Remember when Microsoft's Tay chatbot went live on Twitter and started spouting racist nonsense in less than a day? Years later, it's still regularly cited as an example of AI gone wrong.

However, you could also argue that public awareness of artificial intelligence and its potential adverse effects has grown hugely over the past year. Scandals like Google's secret work with the Pentagon under Project Maven, Amazon's biased facial recognition software, and Facebook's algorithmic incompetence with the Cambridge Analytica scandal have all brought the problems of badly implemented AI into the spotlight. (Interestingly, despite similar exposure, neither Amazon nor Facebook mentions AI risk in its latest 10-K.)

And Microsoft and Google are doing more than many companies to keep abreast of this danger. Microsoft, for example, is arguing that facial recognition software needs to be regulated to guard against potential harms, while Google has started the slow business of engaging with policymakers and academics about AI governance. Giving investors a heads-up too only seems fair.

Source
  8. Habana, the AI chip innovator, promises top performance and efficiency

Habana is the best kept secret in AI chips. Designed from the ground up for machine learning workloads, it promises superior performance combined with power efficiency to revolutionize everything from data centers in the cloud to autonomous cars.

As data generation and accumulation accelerates, we've reached a tipping point where using machine learning just works. Using machine learning to train models that find patterns in data and make predictions based on those is applied to pretty much everything today. But data and models are just one part of the story.

Another part, equally important, is compute. Machine learning consists of two phases: training and inference. In the training phase, patterns are extracted and machine learning models that capture them are created. In the inference phase, trained models are deployed and fed with new data in order to generate results. Both of these phases require compute power. Not just any compute, in fact: as it turns out, CPUs are not really geared towards the specialized type of computation required for machine learning workloads. GPUs are currently the weapon of choice when it comes to machine learning workloads, but that may be about to change.

AI CHIPS JUST GOT MORE INTERESTING

GPU vendor Nvidia has reinvented itself as an AI chip company, coming up with new processors geared specifically towards machine learning workloads and dominating this market. But the boom in machine learning workloads has whetted the appetite of other players as well. Cloud vendors such as Google and AWS are working on their own AI chips. Intel is working on getting FPGA chips in shape to support machine learning. And upstarts are having a go at entering this market as well. GraphCore is the most high-profile among them, with recent funding having catapulted it into unicorn territory, but it's not the only one: enter Habana.

Habana has been working on its own processor for AI since 2015. But as Eitan Medina, its CBO, told us in a recent discussion, it has been doing so in stealth until recently: "Our motto is AI performance, not stories. We have been working under cover until September 2018". David Dahan, Habana CEO, said that "among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor."

As Medina explained, Habana was founded by CEO David Dahan and VP R&D Ran Halutz. Both Dahan and Halutz are semiconductor industry veterans, and they have worked together for years in semiconductor companies CEVA and PrimeSense. The management team also includes CTO Shlomo Raikin, former Intel project architect. Medina himself also has an engineering background: "Our team has a deep background in machine learning. If you Google topics such as quantization, you will find our names," Medina said.

And there's no lack of funding or staff either. Habana just closed a Round B financing round of $75 million, led by Intel Capital no less, which brings its total funding to $120 million. Habana has a headcount of 120 and is based in Tel Aviv, Israel, but also has offices and R&D in San Jose, US, Gdansk, Poland, and Beijing, China. This looks solid. All these people, funds, and know-how have been set in motion by identifying the opportunity.
Much like GraphCore, Habana's Medina thinks that the AI chip race is far from over, and that GPUs may be dominating for the time being, but that's about to change. Habana brings two key innovations to the table: specialized processors for training and inference, and power efficiency.

SEPARATING TRAINING AND INFERENCE TO DELIVER SUPERIOR PERFORMANCE

Medina noted that, starting with a clean sheet to design their processor, one of the key decisions made early on was to address training and inference separately. As these workloads have different needs, Medina said that treating them separately has enabled them to optimize performance for each setting: "For years, GPU vendors have offered new versions of GPUs. Now Nvidia seems to have realized they need to differentiate. We got this from the start."

Habana offers two different processors: Goya, addressing inference; and Gaudi, addressing training. Medina said that Goya is used in production today, while Gaudi will be released in Q2 2019. We wondered why inference was addressed first. Was it because the architecture and requirements for inference are simpler? Medina said that it was a strategic decision based on market signals.

Medina noted that the lion's share of inference workloads in the cloud still runs on CPUs. Therefore, he explained, Habana's primary goal at this stage is to address these workloads as a drop-in replacement. Indeed, according to Medina, Habana's clients at this point are to a large extent data center owners and cloud providers, as well as autonomous car ventures.

The value proposition in both cases is primarily performance. According to benchmarks published by Habana, Goya is significantly faster than both Intel's CPUs and Nvidia's GPUs. Habana used the well-known ResNet-50 benchmark, and Medina explained the rationale was that ResNet-50 is the easiest to measure and compare, as it has fewer variables.

Medina said other architectures must make compromises: "Even when asked to give up latency, throughput is below where we are. With GPUs / CPUs, if you want better performance, you need to group data input in big groups of batches to feed the processor. Then you need to wait till entire group is finished to get the results. These architectures need this, otherwise throughput will not be good. But big batches are not usable. We have super high efficiency even with small batch sizes."

There are some notable points about these benchmarks. The first, Medina pointed out, is that their scale is logarithmic, which is needed to be able to accommodate Goya and the competition in the same charts. Hence the claim that "Habana smokes inference incumbents." The second is that results become even more interesting if power efficiency is factored in.

POWER EFFICIENCY AND THE SOFTWARE STACK

Power efficiency is a metric used to measure how much power is needed per calculation in benchmarks. This is a very important parameter. It's not enough to deliver superior performance alone; the cost of delivering this is just as important. A standard metric to measure processor performance is IPS, Instructions Per Second. But IPS/W, or IPS per Watt, is probably a better one, as it takes into account the cost of delivering performance. Higher power efficiency is better in every possible way. Thinking about data centers and autonomous vehicles, minimizing the cost of electricity and increasing autonomy are key requirements.
And in the bigger picture, lowering carbon footprint is a major concern for the planet. As Medina put it, "You should care about the environment, and you should care about your pocket."

Goya's value proposition for data centers is focused on this, also factoring in latency requirements. As Medina said, for a scenario of processing 45K images/second, three Goya cards can get results with a latency of 1.3 msec, replacing 169 CPU servers with a latency of 70 msec plus 16 Nvidia Tesla V100s with a latency of 2.5 msec, at a total cost of around $400,000. The message is clear: you can do more with less.

TPC, Habana's Tensor Processor Core at the heart of Goya, supports different form factors, memory configurations, and PCIe cards, as well as mixed-precision numerics. It is also programmable in C, and accessible via what Habana calls the GEMM engine (General Matrix Multiplication). This touches upon another key aspect of AI chips: the software stack, and integrations with existing machine learning frameworks.

As there is a slew of machine learning frameworks people use to build their models, supporting as many of them as seamlessly as possible is a key requirement. Goya supports models trained on any processor via an API called SynapseAI. At this point, SynapseAI supports TensorFlow, mxnet and ONNX, an emerging exchange format for deep learning models, and Habana is working on adding support for PyTorch and more.

Users should be able to deploy their models on Goya without having to fiddle with SynapseAI. For those who wish to tweak their models to include customizations, however, the option to do so is there, as well as IDE tools to support them. Medina said this low-level programming has been requested by clients who have developed custom ways of maximizing performance on their current setting and would like to replicate this on Goya.

THE BIGGER PICTURE

So, who are these clients, and how does one actually become a client? Medina said Habana has a sort of screening process for clients, as they are not yet at the point where they can ship massive quantities of Goya. Habana is sampling Goya to selected companies only at this time. That's what's written on the form you'll have to fill in if you're interested.

Not that Goya is a half-baked product; it is used in production, according to Medina. Specific names were not discussed, but yes, these include cloud vendors, so you can let your imagination run wild. Medina also emphasized that its hardware-level R&D for Goya is mostly done. However, there is ongoing work to take things to the next level with 7 nanometer chips, plus work on the Gaudi processor for training, which promises linear scalability. In addition, development of the software stack never ceases, in order to optimize, add new features and support more frameworks. Recently, Habana also published open source Linux drivers for Goya, which should help a lot, considering Linux is what powers most data centers and embedded systems.

Habana, just like GraphCore, seems to have the potential to bring about a major disruption in the AI chip market and the world at large. Many of its premises are similar: a new architecture, experienced team, well funded, and looking to seize the opportunity. One obvious difference is in how they approach their public image, as GraphCore has been quite open about their work, while Habana was a relative unknown up to now.
And the obvious questions -- which one is faster/better, which one will succeed, can they dethrone Nvidia -- we simply don't know. GraphCore has not published any benchmarks. Judging from an organization maturity point of view, Habana seems to be lagging at this point, but that does not necessarily mean much. One thing we can say is that this space is booming, and we can expect AI chip innovation to catalyze AI even further soon. The takeaway from this, however, should be to make power efficiency a key aspect of the AI narrative going forward. Performance comes at a price, and this should be factored in. Source
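One concrete point of reference from the article is that SynapseAI ingests models via the ONNX exchange format. The snippet below sketches the generic, framework-side half of that workflow using PyTorch's standard ONNX exporter; the model, file name and opset are placeholders, and the subsequent Habana-specific compilation step is not shown because the article does not describe it.

```python
import torch
import torchvision.models as models

# Any trained model will do; a torchvision ResNet-50 stands in here.
model = models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # one example input with a fixed shape

# Export to the ONNX interchange format that inference processors can ingest.
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",         # placeholder output path
    input_names=["images"],
    output_names=["logits"],
    opset_version=13,
)
print("wrote resnet50.onnx")
```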
  9. Can AI help crack the code of fusion power?

'It's sort of this beautiful synergy between the human and the machine'

Part of The Real-World AI Issue

With the click of a mouse and a loud bang, I blasted jets of super-hot, ionized gas called plasma into one another at hundreds of miles per second. I was sitting in the control room of a fusion energy startup called TAE Technologies, and I'd just fired its $150 million plasma collider. That shot was a tiny part of the company's long pursuit of a notoriously elusive power source. I was at the company's headquarters to talk to them about the latest phase of their hunt, which involves an algorithm called the Optometrist.

Nuclear fusion is the reaction that's behind the Sun's energetic glow. Here on Earth, the quixotic, expensive quest for controlled fusion reactions gets a lot of hype and a lot of hate. (To be clear, this isn't the same process that happens in a hydrogen bomb. That's an uncontrolled fusion reaction.) The dream is that fusion power would mean plenty of energy with no carbon emissions or risk of a nuclear meltdown. But scientists have been pursuing fusion power for decades, and they are nowhere near adding it to the grid.

Last year, a panel of advisers to the US Department of Energy published a list of game-changers that could "dramatically increase the rate of progress towards a fusion power plant." The list included advanced algorithms, like artificial intelligence and machine learning. It's a strategy that TAE Technologies is banking on: the 20-year-old startup began collaborating with Google a few years ago to develop machine learning tools that it hopes will finally bring fusion within reach.

[Image: Norman, TAE's $150 million plasma collider. Photo by Brennan King and Weston Reel]

Attempts at fusion involve smacking lightweight particles into one another at temperatures high enough that they fuse together, producing a new element and releasing energy. Some experiments control a super-hot ionized gas called plasma with magnetic fields inside a massive metal doughnut called a tokamak. Lawrence Livermore National Laboratory fires the world's largest laser at a tiny gold container with an even tinier pellet of nuclear fuel inside. TAE twirls plasma inside a linear machine named Norman, tweaking thousands of variables with each shot. It's impossible for a person to keep all of those variables in their head or to change them one at a time. That's why TAE is collaborating with Google, using a system called the Optometrist algorithm that helps the team home in on the ideal conditions for fusion.

We weren't sure what to make of all the hype surrounding AI or machine learning or even fusion energy, for that matter. So the Verge Science video team headed to TAE's headquarters in Foothill Ranch, California, to see how far along it is, and where — if anywhere — AI entered the picture. You can watch what we found in the video above. Ultimately, we found a lot of challenges but a lot of persistent optimism, too. "The end goal is to have power plants that are burning clean fuels that are abundant, [and] last for as long as humanity could last," says Erik Trask, a lead scientist at TAE. "Now, we think that we have found a way to do it, but we have to show it. That's the hard part."

Source
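The article does not spell out how the Optometrist algorithm works; public descriptions liken it to an eye exam, with the machine proposing pairs of plasma settings and a human operator picking whichever shot looked better. Under that assumption, here is a deliberately simplified Python sketch of such a human-in-the-loop search; the parameter names and the stand-in "preference" function are invented for illustration and are not TAE's or Google's code.

```python
import random

def propose_neighbour(settings, scale=0.1):
    """Perturb the current machine settings to get a candidate to compare."""
    return {k: v * (1 + random.uniform(-scale, scale)) for k, v in settings.items()}

def human_prefers(candidate, current):
    """Stand-in for the operator's judgement of which shot looked better.

    In the real experiment a physicist compares two actual plasma shots;
    here a hidden score fakes that judgement so the example runs end to end.
    """
    hidden_score = lambda s: -(s["field_T"] - 0.1) ** 2 - (s["gas_puff"] - 2.0) ** 2
    return hidden_score(candidate) > hidden_score(current)

settings = {"field_T": 0.05, "gas_puff": 1.0}   # invented control knobs
for shot in range(100):
    candidate = propose_neighbour(settings)
    if human_prefers(candidate, settings):      # "operator" picks the better shot
        settings = candidate
print(settings)
```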
  10. Could This Technology Make Amazon Go Stores Obsolete?

A start-up company is promising to offer similar artificial intelligence tools that are scalable for larger retailers.

Amazon (AMZN) is revolutionizing how we shop with its cashierless Amazon Go convenience stores, and hopes to expand the concept from fewer than 10 now into a network of 3,000 stores in only two years. There may be logistical issues in building out so many stores in such a short time, as well as political ones if states decide all stores must accept cash. But the biggest challenge may be competing technology that allows more retailers to offer their own no-cashier, no-checkout shopping. It could make the Amazon Go chain obsolete.

[Image source: Amazon.com]

The dawn of smart carts

Amazon Go's "just walk out" shopping experience requires a store to be outfitted with machine vision, deep-learning algorithms, and an array of cameras and sensors to watch a customer's every move. It knows what every item is, and when it's been picked up and put back, so it can charge a shopper's account. A start-up called Caper is offering similar technology that is more accessible to more retailers. Rather than outfit an entire store with such advanced artificial intelligence (AI), Caper puts it into individual shopping carts. It lets supermarkets more easily compete against Amazon without the massive cost necessary to build new stores or retrofit a chain's existing infrastructure.

Shoppers put items into AI-powered carts, which identify the products and ring up the total. Interactive screens on the carts not only keep a running tally of the order, but can also direct shoppers to in-store specials. The technology used by Caper's partners does currently require customers to scan each item into the shopping cart screens, but they're using it to train the deep learning algorithm to enable shopping without scanning. When they are finished shopping, customers can pay via the screen and leave. While bagging up a week's worth of groceries could slow you down, this system seems likely to encourage shoppers to bring their own bags into the store and fill them up as they go. That's often how scan-and-go technologies work, so you're not stuck at the register still having to bag all your items.

Amazon Go's limitations

The original Amazon Go location was only 1,800 square feet, about the size of a convenience store, but it was estimated to cost at least $1 million to install all the cameras and sensors. Analysts estimate that to build out 3,000 stores would cost some $3 billion. While Amazon is testing a range of store sizes, some as large as 2,400 square feet, it's clear the concept is prohibitively expensive to apply to something on the order of a full-sized supermarket, which is why the stores are mostly stocked with convenience items that you can literally grab and go. A Walmart store can run anywhere from 30,000 square feet to over 200,000 for one of its supercenters. Even Amazon's Whole Foods Markets average 40,000 square feet. Retrofitting one of these locations would also seem to be nearly impossible, since all-new shelving and displays would be needed to incorporate the cameras and sensors.

Many grocery chains like Kroger have added so-called scan-and-go capabilities, where customers use a scanning gun or their smartphones to scan in each item they buy, and then upload the data to a register at checkout.
But it's a clunkier system than Amazon's effortless grab-and-go technology; Walmart killed off its own scan-and-go system because it said shoppers weren't using it.

Scaling up artificial intelligence

Caper says its cart technology is currently in place at two grocery store chains, with plans to roll it out to 150 more this year. The company's website lists six retail partners from the New York area, including C-Town, Key Food, and Pioneer supermarkets. It says its shopping carts have increased the value of the average basket by 18% because customers are exposed to products they might otherwise miss or can't find.

Amazon Go is certainly poised to upend the convenience store industry with its advanced AI wizardry. But Caper could revolutionize the much broader and larger supermarket sector and make Amazon's stores obsolete by bringing the same sort of technology to your neighborhood grocery store in a package that's far more scalable.

Source
  11. In the coming months, seeing won't be believing thanks to increasingly convincing computer-generated deepfake videos

[Illustration: David Doran]

Breaking news: a video "leaks" of a famous terrorist leader meeting in secret with an emissary from a country in the Middle East. News organisations air the video as an "unconfirmed report". American officials can neither confirm nor deny the authenticity of the video – a typically circumspect answer on intelligence matters. The US president condemns the country that would dare hold secret meetings with this reviled terrorist mastermind. Congress discusses imposing sanctions. A diplomatic crisis ensues. Perhaps, seeing an opportunity to harness public outrage, the president orders a cruise missile strike on the last known location of the terrorist leader. All of this because of a few seconds of film – a masterful fabrication.

In 2019, we will for the first time experience the geopolitical ramifications of a new technology: the ability to use machine learning to falsify video, imagery and audio that convincingly replicates real public figures. "Deepfakes" is the term becoming shorthand for a broad range of manipulated video and imagery, including face swaps (identity swapping), audio deepfakes (voice swapping), deepfake puppetry (mapping a target's face to an actor's for facial reenactment), and deepfake lip-synching (synthetic video created to match an audio file and footage of a person's face). This term was coined in December 2017 by a Reddit user of the same name, who used open-source artificial intelligence tools to paste celebrities' faces on to pornographic video clips. A burgeoning community of online deepfake creators followed suit.

Deepfakes will continue to improve in ease and sophistication as developers create better AI and new techniques that make it easier to create falsified videos. The telltale signs of a faked video – subjects not blinking, flickering of the facial outline, over-centralised facial features – will become less obvious and, eventually, imperceptible. Ultimately, maybe in a matter of a few years, it will be possible to synthetically generate footage of people without relying on any existing footage. (Current deepfakes need stock footage to provide the background for swapped faces.)

Perpetrators of co-ordinated online disinformation operations will gladly incorporate new, AI-powered digital impersonations to advance political goals, such as bolstering support for a military campaign or swaying an electorate. Such videos may also be used simply to undermine public trust in media. Aside from geopolitical meddling or disinformation campaigns, it's easy to see how this technology could have criminal, commercial applications, such as manipulating stock prices. Imagine a rogue state creating a deepfake that depicts a CEO and CFO furtively discussing missing expectations in the following week's quarterly earnings call. Before releasing the video to a few journalists, they would short the stock – betting on the stock price plummeting when the market overreacts to this "news". By the time the video is debunked and the stock market corrects, the perpetrator has already made away with a healthy profit.

Perhaps the most chilling realisation about the rise of deepfakes is that they don't need to be perfect to be effective. They need to be just good enough that the target audience is duped for just long enough. That's why human-led debunking and the time it requires will not be enough.
To protect people from the initial deception, we will need to develop algorithmic detection capabilities that can work in real time, and we need to conduct psychological and sociological research to understand how online platforms can best inform people that what they’re watching is fabricated. In 2019, the world will confront a new generation of falsified video deployed to deceive entire populations. And we may not realise the video is fake until we’ve already reacted – maybe overreacted. Indeed, it may take such an overreaction for us all to consider how we relate to fast-moving information online, not only from a technological and platform point of view, but from the perspective of everyday citizens and the mistaken assumption that “seeing is believing”. Source
  12. Artificial intelligence (AI) is expected to grow in the cybersecurity marketplace, likely reaching $18.2 billion by 2023, according to a report from P&S Market Research. AI is still only in its nascent stages, though, and the technologies present several obstacles that organizations must overcome, according to a new white paper by Osterman Research.

In a survey sponsored by ProtectWise, Osterman Research learned that AI has penetrated the security operations center (SOC), but there are many challenges that stand in the way of AI being able to deliver on its promises. The survey found that AI has already established a strong foothold, with 73% of respondents reporting that they have implemented security solutions with at least some aspect of AI. Most organizations said their top reasons for incorporating AI were to improve the efficiency of security staff members and to make alert investigations faster.

In addition, survey results concluded that 55% of IT executives are the strongest advocates for AI, while only 38% of AI's strongest supporters identified as non-IT executives. That difference was evidenced in the reported inconsistencies from respondents who reflected on the results of their initial deployment of AI-enabled security products. Participants confessed that AI-enabled security solutions produce significantly more security alerts and false positives on a typical day, with 61% of respondents agreeing that creating and implementing rules is burdensome and 52% saying they have no plan to implement additional AI-enabled security solutions in the future. More than half (61%) of all respondents said that AI doesn't stop zero-days and advanced threats, 54% said it delivers inaccurate results and 42% said it's difficult to use. Additionally, 71% said it's expensive.

While there is still progress that needs to be made, the survey found the future of AI has great potential. "Our bottom line assessment is that AI is not yet 'there,' but offers the promise of improving the speed of processing alerts and false positives, particularly in organizations that receive massive numbers of both. Moreover, while the full potential of AI has yet to be realized, it holds the promise of seriously addressing the cybersecurity skills shortage – it may not be a 'silver bullet,' but it may be a silver-plated one," the survey said.

Source
  13. Deep inside the Earth, miles down in many cases, rock-sealed pockets hold buried treasures. These hydrocarbon reservoirs are packed with organic compounds that make the world go 'round. When the contents are extracted and refined, the resulting oil and gas help light cities, transport people and run industries.

For some engineers at BP, Job One is locating the reservoirs. Job Two is accurately predicting what percentage of hydrocarbons are retrievable, also known as "recovery factor." Traditionally, that task has been iterative and resource-heavy, and it can have an element of human bias. Data scientists, tapping their own expertise and experiences, may try six or seven different algorithms as they work to dial in the best prediction model. This can take weeks. But by using Azure Machine Learning service, BP is working to reduce the time needed to pinpoint prediction models while also boosting the productivity of its data scientists. Automated machine learning empowers customers to identify an end-to-end machine-learning pipeline for any problem.

Transform recently caught up with Manish Naik, BP's principal for digital innovation, at his London office to learn more about the company's new method for drilling down into its data.

TRANSFORM: What is the value that BP gains by improving its recovery factor forecasts?

MANISH NAIK: This prediction of recovery factor from underlying data is a crucial activity – the basis of key decisions made by the company that are potentially worth billions of dollars. This data is vast and complex, involving hundreds of geological properties or features. To complement the current ways of prediction, which tend to have some qualitative input, we decided to explore machine learning to see if we can improve prediction. We sought to answer these questions: Can we improve the quality of the prediction? Can we eliminate some of the human bias?

[Image: Manish Naik. Courtesy of BP.]

TRANSFORM: How do your data scientists use automated machine learning?

NAIK: They give it broad direction. With one line of code, it runs through different algorithms within the prediction family and the different parameter (or variable) combos that previously were manually tested by the scientists. The power of the cloud comes in here. The results are comparable to what the data scientists produced.

TRANSFORM: One line of code, wow. How much time does this save them?

NAIK: Depending on the amount of data, type of activity – such as prediction or classification – and algorithm family, automated machine learning could potentially reduce the effort from weeks to days, or from days to hours.

TRANSFORM: How often is the prediction model that BP developed for recovery factor now used across the company?

NAIK: This model is in production and used by hundreds of subject matter experts globally in BP on a daily basis.

TRANSFORM: As automated machine learning becomes a core tool for BP's data scientists, what are the larger, potential benefits for the company?

NAIK: It will make data scientists more productive, which means faster time to market for machine-learning (ML) projects. And as data scientists continue to use more and more of automated machine learning, they will develop trust in the output it provides. That can become a starting point for the work of our data scientists. In the future, this will form a part of a robust benchmarking process for all ML projects, thus improving quality.
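Azure Machine Learning's automated ML drives this kind of search from a single configuration call. Conceptually, it is a loop over candidate algorithms scored the same way, as in the framework-agnostic scikit-learn sketch below; the candidate list and synthetic data are illustrative only and are not BP's or Azure's actual pipeline.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Stand-in for the geological features and measured recovery factors.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Score every candidate the same way and keep the best -- the manual loop
# that automated ML services run (far more exhaustively) on your behalf.
scores = {name: cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```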
TRANSFORM: More broadly, in what ways do you foresee artificial intelligence (AI) and the cloud further reshaping the oil and gas industry?

NAIK: Oil and gas companies across the value chain – from exploration to retail – generate significant amounts of data. This means there are lots of opportunities to exploit this data using AI, ML and cloud technologies. In broad terms, there is significant potential for these technologies to help improve the efficiency of our operations and help us make better, more accurate and informed decisions.

source
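As a rough illustration of the automated machine learning workflow Naik describes, here is a minimal sketch using the automated ML interface of Microsoft's v1 azureml-sdk. This is not BP's code: the workspace, the registered dataset name, the label column and the metric are all illustrative assumptions.

# Hedged sketch of an automated ML sweep with the v1 azureml-sdk; not BP's actual code.
# The workspace, dataset name, label column and metric below are illustrative placeholders.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                                   # existing Azure ML workspace
training_data = Dataset.get_by_name(ws, "reservoir_features")  # hypothetical registered dataset

# The single configuration that sweeps algorithms and parameter combinations within the
# regression family, replacing the manual trial of six or seven candidate models.
automl_config = AutoMLConfig(
    task="regression",                            # recovery factor is a continuous target
    training_data=training_data,
    label_column_name="recovery_factor",          # hypothetical label column
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations=5,
    experiment_timeout_hours=1,
)

run = Experiment(ws, "recovery-factor-automl").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()         # best pipeline found by the sweep

In a setup like this, the sweep takes over the weeks of manual model selection described in the interview, while the data scientist's judgment shifts to framing the problem and validating the winning pipeline.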
  14. When Alice fell down the rabbit hole, she got everything she asked for and more: cute little rabbits with gloves on their hands, caterpillars that could talk, and mean, nasty flowers that thought she was a weed. It’s a comical story, but one that could become reality in the future. The expression “going down the rabbit hole” is sometimes used to describe just how far we are willing to push the limits. On a basic literary level, it speaks of the beginning of a fanciful adventure that we cannot understand — but one that will change our lives forever. It is with this in mind that computer scientists warn of a potential danger we may not even be aware of: our tinkering with artificial intelligence could lead to an external brain or A.I. system that we will no longer have the ability to control. A recent editorial published on TechnologyReview.com — MIT’s resource for exploring new technologies — warned of the pace at which we are advancing technology. Recent algorithms are being designed at such a remarkable speed that even their creators are astounded. “This could be a problem,” writes Will Knight, the author of the report. Knight describes a 2016 milestone: a self-driving car that was quietly released onto the roads of New Jersey. Chip maker Nvidia differentiated its model from those of other companies such as Google, Tesla, or General Motors by having the car rely entirely on an algorithm that taught itself how to drive after “watching” a human do it. Nvidia’s car successfully taught itself how to drive, much to the delight of the company’s scientists. Nevertheless, Nvidia’s programmers were unsettled by how much (and how fast) the algorithm learned the skill. Clearly, the system was able to gather information and translate it into tangible results, yet exactly how it did this was not known. The system was designed so that information from the vehicle’s sensors was transmitted into a huge network of artificial neurons, which would then process the data and deliver an appropriate command to the steering wheel, brakes, or other systems. These responses match those of a human driver. But what would happen if the car did something totally unexpected — say, smashed into a tree or ran a red light? There are complex behaviors and actions that could potentially happen, and the very scientists who made the system struggle to come up with an answer.

AI is learning…and it’s learning pretty darn fast

Nvidia’s underlying A.I. technology is based on the concept of “deep learning,” which, until recently, scientists were not sure could be applied to robots. The theory of an external or artificial “thinking” brain is nothing new; it has colored our imaginations since the 1950s. The sore lack of materials and the sheer manual labor needed to input all the data, however, have prevented the dream from coming to fruition. Nevertheless, advancements in technology have resulted in several breakthroughs, including the Nvidia self-driving car. Already there are aspirations to develop self-thinking robots that can write news, detect schizophrenia in patients, and approve loans, among other things. Is it exciting? Yes, of course it is; but scientists are worried about the unspoken implications of this growth. The MIT editorial says that “we [need to] find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur — and it’s inevitable they will.
That’s one reason Nvidia’s car is still experimental.” In an effort to control these systems, some of the world’s largest technology firms have banded together to create an “A.I. ethics board.” As reported on DailyMail.co.uk, the companies involved are Amazon, DeepMind, Google, Facebook, IBM, and Microsoft. The coalition calls itself the Partnership on Artificial Intelligence to Benefit People and Society and operates under eight ethical tenets. The objective of the group is to ensure that advancements in these technologies empower as many people as possible, and that each member remains actively engaged in the development of A.I. and accountable to its broad range of stakeholders. Just how far down the rabbit hole are we, as a society, planning on going? You can learn a little bit more when you visit Robotics.news. < Here >
  15. If science fiction has taught us anything, it’s that artificial intelligence will one day lead to the downfall of the entirety of mankind. That day is (probably) still a long way away, if it ever actually happens, but for now we get to enjoy some of the nicer aspects of AI, such as its ability to write poetic masterpieces. Researchers in Australia, in partnership with the University of Toronto, have developed an algorithm capable of writing poetry. Far from your generic rhymes, this AI actually follows the rules, taking metre into account as it weaves its words. The AI is good. Really good. And it’s even capable of tricking humans into thinking that its poems were penned by a human instead of a machine. According to the researchers, the AI was trained extensively on the rules it needed to follow to craft an acceptable poem. It was fed nearly 3,000 sonnets as training, and the algorithm tore them apart to teach itself how the words worked with each other. Once the bot was brought up to speed, it was tasked with crafting some poems of its own. Here’s a sample:

With joyous gambols gay and still array
No longer when he twas, while in his day
At first to pass in all delightful ways
Around him, charming and of all his days

Not bad, huh? Of course, knowing that an AI made it might make it feel more stilted and dry than if you had read it without any preconceptions, but there’s no denying that it’s a fine poem. In fact, the poems written by the AI follow the rules of poetry even more closely than those of human poets like Shakespeare. I guess that’s the cold machine precision kicking in. When the bot’s verses were mixed with human-written poems and then reviewed by volunteers, the readers were split 50-50 over who wrote them. That’s a pretty solid vote of confidence in the AI’s favor, but there were still some things that gave the bot away, including errors in wording and grammar. Still, it’s a mighty impressive achievement. Perhaps when our robot overlords enslave humanity we’ll at least be treated to some nice poetry. < Here >
  16. In 2020, Tokyo will break a tech record by hosting the first Olympics to use facial recognition technology in an attempt to improve security. On Tuesday, the organizing committee announced that Tokyo will use the technology to identify officials, athletes, media, and staff at the 2020 Olympic and Paralympic Games. It will not, however, be used to identify any spectators who attend the events. The use of facial recognition technology is an effort both to increase security and to hasten the entrance of authorized individuals. The Japanese IT conglomerate NEC Corp. is currently developing the system. The company will collect digital images of all authorized individuals in advance and store them in a database so that cross-checking can be conducted at all entry points. Tsuyoshi Iwashita, executive director of security for the Tokyo 2020 Games, had this to say in the announcement on the committee’s blog: “This latest technology will enable strict identification of accredited people compared with relying solely on the eyes of security staff, and also enables swift entry to venues which will be necessary in the intense heat of summer. I hope this will ensure a safe and secure Olympic and Paralympic Games and help athletes perform at their best.” New technology is often embraced by the Olympic Games. The PyeongChang Winter Olympics in Korea used fuel-cell-powered Nexo SUVs to shuttle attendees around PyeongChang and Gangneung, and automatically inflating airbags were used by skiers as crash protection. NEC Corp. and the Tokyo Organizing Committee did not immediately respond to requests for comment. Could this be the first real step toward facial recognition being used as standard at large events? Let us know what you think about this article in the comments section below. < Here >
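The enrol-and-cross-check approach described above can be illustrated with a small sketch: faces of accredited people are turned into embedding vectors ahead of time, and a capture at the gate is matched against that gallery. This is purely a toy illustration, not NEC's system; the 128-dimensional embeddings, the identities and the threshold are made-up assumptions, with random vectors standing in for a real face-embedding model.

# Toy sketch of enrol-then-cross-check matching; not NEC's system. The embeddings are
# assumed to come from some face-embedding model and are faked here with random vectors.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gallery built in advance from images of accredited athletes, officials, media and staff.
enrolled = {
    "athlete_001": rng.normal(size=128),
    "official_042": rng.normal(size=128),
}

def check_entry(probe_embedding, gallery, threshold=0.6):
    """Return the best-matching accredited identity, or None if no match clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe_embedding, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

probe = enrolled["athlete_001"] + 0.1 * rng.normal(size=128)  # noisy capture at a venue gate
print(check_entry(probe, enrolled))                           # expected: athlete_001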
  17. The Department of Defense’s network is protected from malware threats by Sharkseer, one of the top National Security Agency (NSA) cybersecurity programs. The DoD is transferring the Sharkseer program to the Defense Information Systems Agency because it aligns better with the DISA mission, according to NSA spokeswoman Natalie Pittore. The transfer from NSA to DISA has been laid out in the National Defense Authorization Act. Congress negotiated the NDAA back on July 23rd of this year, but the actual hand-off seems to have been planned for a long time. According to congressional records, top NSA officials have for numerous years deemed the program “among the highest cybersecurity initiatives.” Sharkseer utilizes artificial intelligence (AI) to inspect incoming traffic and search for any possible vulnerabilities. Even at a basic level, the program examines all of the DoD’s incoming traffic for zero-day exploits and advanced threats, monitoring documents, emails and any other incoming traffic that could infect the network. The program automatically determines both the location and the identity of computer hosts that have either sent or received malware. Sharkseer also acts as a sort of “sandbox”; Pittore describes it as an application government officials use to test suspicious-looking files through automated behavior analysis. The DoD’s cybersecurity has been criticized by Congress, which accused it of being deployed in a “piecemeal fashion.” However, lawmakers have praised Sharkseer as a successful program that has detected more than 2 billion cyber events across the DoD’s networks, both unclassified and classified, according to a statement by Rep. Barbara Comstock, R-Va. Sharkseer escalated from a concept to an actual reality back in 2014, when $30 million in congressional funding was invested in the program. Congress has attempted to secure more funds for Sharkseer for future fiscal years, but the amount that was eventually apportioned is unclear. Both houses of Congress still need to approve the NDAA, and it must also be signed by the President, though the Sharkseer provision is not thought to be controversial. < Here >
  18. Doctors fed it hypothetical scenarios, not real patient data

IBM’s Watson supercomputer gave unsafe recommendations for treating cancer patients, according to documents reviewed by Stat. The report is the latest sign that Watson, once hyped as the future of cancer research, has fallen far short of expectations. In 2012, doctors at Memorial Sloan Kettering Cancer Center partnered with IBM to train Watson to diagnose and treat patients. But according to IBM documents dated from last summer, the supercomputer has frequently given bad advice, such as when it suggested a cancer patient with severe bleeding be given a drug that could cause the bleeding to worsen. (A spokesperson for Memorial Sloan Kettering said this suggestion was hypothetical and not inflicted on a real patient.) “This product is a piece of s—,” one doctor at Jupiter Hospital in Florida told IBM executives, according to the documents. “We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.” The documents come from a presentation given by Andrew Norden, IBM Watson’s former deputy health chief, right before he left the company. In addition to showcasing customer dissatisfaction, they reveal problems with Watson’s methods. Watson for Oncology was supposed to synthesize enormous amounts of data and come up with novel insights. But it turns out most of the data fed to it is hypothetical and not real patient data. That means the suggestions Watson made were simply based on the treatment preferences of the few doctors providing the data, not actual insights it gained from analyzing real cases. An IBM spokesperson told Gizmodo that Watson for Oncology has “supported care for more than 84,000 patients” and is still learning. Apparently, it’s not learning the right things. < Here >
  19. Software developed at the University of Bonn can accurately predict future actions Summary: Scientists have developed software that can look minutes into the future: The program learns the typical sequence of actions, such as cooking, from video sequences. Then it can predict in new situations what the chef will do at which point in time. Computer scientists from the University of Bonn have developed software that can look a few minutes into the future: The program first learns the typical sequence of actions, such as cooking, from video sequences. Based on this knowledge, it can then accurately predict in new situations what the chef will do at which point in time. Researchers will present their findings at the world's largest Conference on Computer Vision and Pattern Recognition, which will be held June 19-21 in Salt Lake City, USA. The perfect butler, as every fan of British social drama knows, has a special ability: He senses his employer's wishes before they have even been uttered. The working group of Prof. Dr. Jürgen Gall wants to teach computers something similar: "We want to predict the timing and duration of activities -- minutes or even hours before they happen," he explains. A kitchen robot, for example, could then pass the ingredients as soon as they are needed, pre-heat the oven in time -- and in the meantime warn the chef if he is about to forget a preparation step. The automatic vacuum cleaner meanwhile knows that it has no business in the kitchen at that time, and instead takes care of the living room. We humans are very good at anticipating the actions of others. For computers however, this discipline is still in its infancy. The researchers at the Institute of Computer Science at the University of Bonn are now able to announce a first success: They have developed self-learning software that can estimate the timing and duration of future activities with astonishing accuracy for periods of several minutes. Training data: four hours of salad videos The training data used by the scientists included 40 videos in which performers prepare different salads. Each of the recordings was around 6 minutes long and contained an average of 20 different actions. The videos also contained precise details of what time the action started and how long it took. The computer "watched" these salad videos totaling around four hours. This way, the algorithm learned which actions typically follow each other during this task and how long they last. This is by no means trivial: After all, every chef has his own approach. Additionally, the sequence may vary depending on the recipe. "Then we tested how successful the learning process was," explains Gall. "For this we confronted the software with videos that it had not seen before." At least the new short films fit into the context: They also showed the preparation of a salad. For the test, the computer was told what is shown in the first 20 or 30 percent of one of the new videos. On this basis it then had to predict what would happen during the rest of the film. That worked amazingly well. Gall: "Accuracy was over 40 percent for short forecast periods, but then dropped the more the algorithm had to look into the future." For activities that were more than three minutes in the future, the computer was still right in 15 percent of cases. However, the prognosis was only considered correct if both the activity and its timing were correctly predicted. 
Gall and his colleagues want the study to be understood only as a first step into the new field of activity prediction. Especially since the algorithm performs noticeably worse if it has to recognize on its own what happens in the first part of the video, instead of being told. Because this analysis is never 100 percent correct -- Gall speaks of "noisy" data. "Our process does work with it," he says. "But unfortunately nowhere near as well." Sample test videos and predictions derived from them are available at https://www.youtube.com/watch?v=xMNYRcVH_oI < Here >
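A toy sketch of the evaluation described above, in no way the Bonn group's code: the model observes the first part of a video's action timeline and must predict both the label and the timing of what follows, so accuracy can be scored per time step over the unobserved remainder. The action labels and durations below are invented.

# Toy illustration of time-step-wise prediction accuracy over the unobserved part of a video.
# Not the University of Bonn software; the labels and durations are invented.
def framewise_accuracy(true_actions, predicted_actions):
    """Fraction of time steps where the predicted action label matches the ground truth."""
    assert len(true_actions) == len(predicted_actions)
    correct = sum(t == p for t, p in zip(true_actions, predicted_actions))
    return correct / len(true_actions)

# Hypothetical ground truth vs. prediction for the last 70% of a short salad video,
# one label per second; the predicted "cut_tomato" segment ends five seconds too early.
truth      = ["cut_tomato"] * 30 + ["add_dressing"] * 10 + ["mix"] * 20
prediction = ["cut_tomato"] * 25 + ["add_dressing"] * 15 + ["mix"] * 20

print(f"accuracy: {framewise_accuracy(truth, prediction):.2f}")  # 0.92 for this toy case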
  20. Microsoft president Brad Smith warns authorities might track, investigate or arrest people based on flawed evidence

Microsoft has called for facial recognition technology to be regulated by government, with laws governing its acceptable uses. In a blog post on the company’s website on Friday, Microsoft president Brad Smith called for a bipartisan congressional “expert commission” to look into regulating the technology in the US. “It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse,” he wrote. “Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime.” Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognising a person’s face from a photo or through a camera. In May, US civil liberties groups called on Amazon to stop offering facial recognition services to governments, warning that the software could be used to unfairly target immigrants and people of colour. While Smith called some uses for the technology positive and “potentially even profound” – such as finding a missing child or identifying a terrorist – he said other potential applications were “sobering”. “Imagine a government tracking everywhere you walked over the past month without your permission or knowledge,” he wrote. “Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. “Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.” Smith said the need for government action “does not absolve technology companies of our own ethical responsibilities”. “We recognise that one of the difficult issues we’ll need to address is the distinction between the development of our facial recognition services and the use of our broader IT infrastructure by third parties that build and deploy their own facial recognition technology,” he wrote. He said Microsoft, which supplies face recognition to some businesses, has already rejected some customers’ requests to deploy the technology in situations involving “human rights risks”. A Microsoft spokeswoman declined to provide more details about what opportunities the company has passed over because of ethical concerns. Smith also defended the company’s contract with US Immigration and Customs Enforcement, saying it did not involve face recognition. < Here >
  21. System can track multiple people across scenes, despite changing camera angles

BIG BLUE IBM has used its Watson artificial intelligence (AI) tech to develop a new algorithm for multi-face tracking. The system uses AI to track multiple individuals across scenes, despite changing camera angles, lighting, and appearances. Collaborating with Professor Ying Hung of the Department of Statistics and Biostatistics at Rutgers University, IBM Watson researcher Chung-Ching Lin led a team of scientists to develop the technology, using a method to spot different individuals in a video sequence. The system is also able to recognise if people leave and then re-enter the video, even if they look very different. To create this innovation in AI, Lin explained that the team first made 'tracklets' for the people present in the source material. "The tracklets are based on co-occurrence of multiple body parts (face, head and shoulders, upper body, and whole body), so that people can be tracked even when they are not fully in view of the camera - for example, their faces are turned away or occluded by other objects." Lin added: "We formulate the multi-person tracking problem as a graph structure with two types of edges." The first of these is 'spatial edges', which denote the connections of different body parts of a candidate within a frame and are used to generate the hypothesised state of a candidate. The second is 'temporal edges', which refer to the connections of the same body parts over adjacent frames and are used to estimate the state of each individual person in different frames. "We generate face tracklets using face-bounding boxes from each individual person's tracklets and extract facial features for clustering," he added. To see how well the technology could perform, Lin and his team compared it against state-of-the-art methods in analysing challenging datasets of unconstrained videos. In one experiment, they used music videos, which feature high image quality but significant, rapid changes in the scene, camera setting, camera movement, makeup, and accessories, such as eyeglasses. "Our algorithm outperformed other methods with respect to both clustering accuracy and tracking," Lin added. "Clustering purity was substantially better with our algorithm compared with the other methods [and] automatically determined the number of people, or clusters, to be tracked without the need for manual video analysis." The algorithm and its performance are described in more detail in IBM's CVPR research paper, A Prior-Less Method for Multi-Face Tracking in Unconstrained Videos. < Here >
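To make the two edge types in Lin's formulation a little more concrete, here is a toy sketch (in no way IBM's implementation): 'spatial' edges connect the body parts of one candidate within a frame, while 'temporal' edges connect the same body part of the same candidate across adjacent frames. The detections and bounding boxes are invented.

# Toy sketch of the graph formulation described above; not IBM's code.
# Nodes are (frame, candidate, body_part) detections with invented bounding boxes.
from itertools import combinations

# Hypothetical detections: {frame_index: {candidate_id: {body_part: (x1, y1, x2, y2)}}}
detections = {
    0: {"A": {"face": (10, 10, 40, 40), "upper_body": (5, 40, 60, 120)}},
    1: {"A": {"face": (12, 11, 42, 41), "upper_body": (6, 41, 62, 122)},
        "B": {"upper_body": (200, 50, 260, 140)}},   # B's face is occluded in this frame
}

spatial_edges, temporal_edges = [], []

# Spatial edges: connect every pair of body parts belonging to one candidate in one frame.
for frame, candidates in detections.items():
    for cand, parts in candidates.items():
        for p1, p2 in combinations(sorted(parts), 2):
            spatial_edges.append(((frame, cand, p1), (frame, cand, p2)))

# Temporal edges: connect the same body part of the same candidate across adjacent frames.
for frame in sorted(detections):
    nxt = detections.get(frame + 1, {})
    for cand, parts in detections[frame].items():
        for part in parts:
            if part in nxt.get(cand, {}):
                temporal_edges.append(((frame, cand, part), (frame + 1, cand, part)))

print(len(spatial_edges), "spatial edges;", len(temporal_edges), "temporal edges")
# -> 2 spatial edges; 2 temporal edges (B contributes none: one part, no adjacent match)

Grouping the face nodes that remain connected through such a graph is what yields the per-person face tracklets the article describes.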
  22. Fast.ai - Part 1 - Lesson 1 - Annotated notes

Building a world-class image classifier with three lines of Python code

The first lesson gives an introduction to the why and how of the fast.ai course, and you will learn the basics of Jupyter Notebooks and how to use the fast.ai library to build a world-class image classifier in three lines of Python. You will get a feel for what deep learning is and why it works, as well as possible applications you can build yourself.

Introduction

These are my annotated notes from the first lesson of the first part of the Fast.ai course. I’m taking the course for a second time, which means I’m re-watching the videos, reading the papers and making sure I am able to reproduce the code. As part of this process I’m writing down more detailed notes to help me better understand the material. Maybe they can be of help to you as well.

Lesson takeaways

By the end of the lesson you should know/understand: the goal of fast.ai; how to use fast.ai; the “practical, top-down approach” of fast.ai; how to run Python code in Jupyter notebooks; how to apply the basic shortcuts in a Jupyter notebook; why we need GPUs and where to access them; how to build your own image classifier using the fast.ai library; the basic building blocks of a neural network; what deep learning is and why it works; how you can apply it yourself; the basics of how convolutional networks work; what gradient descent, a learning rate and an epoch are; and how to set a good learning rate to train your model.

Table of Contents: TL;DR; Introduction; Lesson takeaways; Table of Contents; The goal of fast.ai; The practical, top-down approach of fast.ai; How to use fast.ai; Fast.ai Part 1 course structure; How to run Python code in Jupyter notebooks (Jupyter Notebook basics; Python 3; Jupyter Notebook shortcuts; General shortcuts; Code shortcuts; Notebook configuration; Reload extensions; matplotlib inline); Why we need a GPU and where to access them (Options discussed in the lesson video; Some more options; My personal experience); How to build your own classifier using the fast.ai library (fast.ai library; A labeled dataset; Side note: Getting the data - PDL - Python Download Library; Explore a data sample; Train our first network: architecture, data, learn, learn.fit(); Try it yourself); What deep learning is and why it works (Why we classify; AlphaGo; Fraud detection; Artificial Intelligence > Machine Learning > Deep Learning; Infinitely flexible function: A neural network; All-purpose parameter fitting: Gradient Descent; Fast and scalable; One more thing; Putting it all together: examples of deep learning); Digging a little deeper: The basics of Convolutional Networks (Infinitely flexible function; Gradient Descent and learning rate; Putting it all together); The goal for this lesson; Notebooks used in the lesson; Interesting links / Links from the lesson.

If interested, please take the course < here >.
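For reference, the "three lines" the notes refer to look roughly like this under the fastai 0.7-era API used in that run of the course (later fastai releases changed the API substantially); the dataset path, architecture and hyperparameters below are illustrative assumptions rather than the only valid choices.

# A sketch of the "three lines" under the fastai 0.7-era API used by the 2018 course notebooks;
# the dataset path, architecture and hyperparameters are illustrative, not prescriptive.
from fastai.transforms import *            # old-style wildcard imports mirroring the notebooks
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *

PATH = "data/dogscats/"                    # folder with train/ and valid/, one subfolder per class
arch, sz = resnet34, 224                   # pretrained architecture and input image size

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)                         # learning rate 0.01, 2 epochs (illustrative)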
  23. A video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii in Beijing. Credit: Gilles Sabrie for The New York Times

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station. In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival. In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor. With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry. “In the past, it was all about instinct,” said Shan Jun, the deputy chief of the police at the railway station in Zhengzhou, where the heroin smuggler was caught. “If you missed something, you missed it.” China is reversing the commonly held vision of technology as a great democratizer, bringing people more freedom and connecting them to the world. In China, it has brought control. In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who can’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States. Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places. Even so, China’s ambitions outstrip its abilities. Technology in place at one train station or crosswalk may be lacking in another city, or even the next block over. Bureaucratic inefficiencies prevent the creation of a nationwide network. For the Communist Party, that may not matter. Far from hiding their efforts, Chinese authorities regularly state, and overstate, their capabilities. In China, even the perception of surveillance can keep the public in line. Some places are further along than others. Invasive mass-surveillance software has been set up in the country’s west to track members of the Uighur Muslim minority and map their relations with friends and family, according to software viewed by The New York Times. “This is potentially a totally new way for the government to manage the economy and society,” said Martin Chorzempa, a fellow at the Peterson Institute for International Economics. “The goal is algorithmic governance,” he added.

The Shame Game

The intersection south of Changhong Bridge in the city of Xiangyang used to be a nightmare. Cars drove fast and jaywalkers darted into the street. Then last summer, the police put up cameras linked to facial recognition technology and a big, outdoor screen. Photos of lawbreakers were displayed alongside their names and government I.D. numbers. People were initially excited to see their faces on the board, said Guan Yue, a spokeswoman, until propaganda outlets told them it was punishment. “If you are captured by the system and you don’t see it, your neighbors or colleagues will, and they will gossip about it,” she said.
“That’s too embarrassing for people to take.” China’s new surveillance is based on an old idea: Only strong authority can bring order to a turbulent country. Mao Zedong took that philosophy to devastating ends, as his top-down rule brought famine and then the Cultural Revolution. His successors also craved order but feared the consequences of totalitarian rule. They formed a new understanding with the Chinese people. In exchange for political impotence, they would be mostly left alone and allowed to get rich. It worked. Censorship and police powers remained strong, but China’s people still found more freedom. That new attitude helped usher in decades of breakneck economic growth. Today, that unwritten agreement is breaking down. China’s economy isn’t growing at the same pace. It suffers from a severe wealth gap. After four decades of fatter paychecks and better living, its people have higher expectations. Xi Jinping, China’s top leader, has moved to solidify his power. Changes to Chinese law mean he could rule longer than any leader since Mao. And he has undertaken a broad corruption crackdown that could make him plenty of enemies. For support, he has turned to the Mao-era beliefs in the importance of a cult of personality and the role of the Communist Party in everyday life. Technology gives him the power to make it happen. “ Reform and opening has already failed, but no one dares to say it,” said Chinese historian Zhang Lifan, citing China’s four-decade post-Mao policy. “The current system has created severe social and economic segregation. So now the rulers use the taxpayers’ money to monitor the taxpayers.” Mr. Xi has launched a major upgrade of the Chinese surveillance state. China has become the world’s biggest market for security and surveillance technology, with analysts estimating the country will have almost 300 million cameras installed by 2020. Chinese buyers will snap up more than three-quarters of all servers designed to scan video footage for faces, predicts IHS Markit, a research firm. China’s police will spend an additional $30 billion in the coming years on techno-enabled snooping, according to one expert quoted in state media. Government contracts are fueling research and development into technologies that track faces, clothing and even a person’s gait. Experimental gadgets, like facial-recognition glasses, have begun to appear. Judging public Chinese reaction can be difficult in a country where the news media is controlled by the government. Still, so far the average Chinese citizen appears to show little concern. Erratic enforcement of laws against everything from speeding to assault means the long arm of China’s authoritarian government can feel remote from everyday life. As a result, many cheer on new attempts at law and order. “It’s one of the biggest intersections in the city,” said Wang Fukang, a college student who volunteered as a guard at the crosswalk in Xiangyang. “It’s important that it stays safe and orderly.” The Surveillance Start-Up Start-ups often make a point of insisting their employees use their technology. In Shanghai, a company called Yitu has taken that to the extreme. The halls of its offices are dotted with cameras, looking for faces. From desk to break room to exit, employees’ paths are traced on a television screen with blue dotted lines. The monitor shows their comings and goings, all day, everyday. In China, snooping is becoming big business. 
As the country spends heavily on surveillance, a new generation of start-ups have risen to meet the demand. Chinese companies are developing globally competitive applications like image and voice recognition. Yitu took first place in a 2017 open contest for facial recognition algorithms held by the United States government’s Office of the Director of National Intelligence. A number of other Chinese companies also scored well. A technology boom in China is helping the government’s surveillance ambitions. In sheer scale and investment, China already rivals Silicon Valley. Between the government and eager investors, surveillance start-ups have access to plenty of money and other resources. In May, the upstart A.I. company SenseTime raised $620 million, giving it a valuation of about $4.5 billion. Yitu raised $200 million last month. Another rival, Megvii, raised $460 million from investors that included a state-backed fund created by China’s top leadership. At a conference in May at an upscale hotel in Beijing, China’s security-industrial complex offered its vision of the future. Companies big and small showed off facial-recognition security gates and systems that track cars around cities to local government officials, tech executives and investors. Private companies see big potential in China’s surveillance build-out. China’s public security market was valued at more than $80 billion last year but could be worth even more as the country builds its capabilities, said Shen Xinyang, a former Google data scientist who is now chief technology officer of Eyecool, a start-up. “Artificial intelligence for public security is actually still a very insignificant portion of the whole market,” he said, pointing out that most equipment currently in use was “nonintelligent.” Many of these businesses are already providing data to the government. Mr. Shen told the group that his company had surveillance systems at more than 20 airports and train stations, which had helped catch 1,000 criminals. Eyecool, he said, is also handing over two million facial images each day to a burgeoning big-data police system called Skynet. At a building complex in Xiangyang, a facial-recognition system set up to let residents quickly through security gates adds to the police’s collection of photos of local residents, according to local Chinese Communist Party officials. Wen Yangli, an executive at Number 1 Community, which makes the product, said the company is at work on other applications. One would detect when crowds of people are clashing. Another would allow police to use virtual maps of buildings to find out who lives where. China’s surveillance companies are also looking to test the appetite for high-tech surveillance abroad. Yitu says it has been expanding overseas, with plans to increase business in regions like Southeast Asia and the Middle East. At home, China is preparing its people for next-level surveillance technology. A recent state-media propaganda film called “Amazing China” showed off a similar virtual map that provided police with records of utility use, saying it could be used for predictive policing. “If there are anomalies, the system sends an alert,” a narrator says, as Chinese police officers pay a visit to an apartment with a record of erratic utility use. The film then quotes one of the officers: “No matter which corner you escape to, we’ll bring you to justice.” Enter the Panopticon For technology to be effective, it doesn’t always have to work. Take China’s facial-recognition glasses. 
Police in the central Chinese city of Zhengzhou recently showed off the specs at a high-speed rail station for state media and others. They snapped photos of a policewoman peering from behind the shaded lenses. But the glasses work only if the target stands still for several seconds. They have been used mostly to check travelers for fake identifications. China’s national database of individuals it has flagged for watching — including suspected terrorists, criminals, drug traffickers, political activists and others — includes 20 million to 30 million people, said one technology executive who works closely with the government. That is too many people for today’s facial recognition technology to parse, said the executive, who asked not to be identified because the information wasn’t public. The system remains more of a digital patchwork than an all-seeing technological network. Many files still aren’t digitized, and others are on mismatched spreadsheets that can’t be easily reconciled. Systems that police hope will someday be powered by A.I. are currently run by teams of people sorting through photos and data the old-fashioned way. Take, for example, the crosswalk in Xiangyang. The images don’t appear instantaneously. The billboard often shows jaywalkers from weeks ago, though recently authorities have reduced the lag to about five or six days. Officials said humans still sift through the images to match them to people’s identities. Still, Chinese authorities who are generally mum about security have embarked on a campaign to convince the country’s people that the high-tech security state is already in place. China’s propagandists are fond of stories in which police use facial recognition to spot wanted criminals at events. An article in the People’s Daily, the Communist Party’s official newspaper, covered a series of arrests made with the aid of facial recognition at concerts of the pop star Jackie Cheung. The piece referenced some of the singer’s lyrics: “You are a boundless net of love that easily trapped me.” In many places, it works. At the intersection in Xiangyang, jaywalking has decreased. At the building complex where Number 1 Community’s facial-recognition gate system has been installed, a problem with bike theft ceased entirely, according to building management. “The whole point is that people don’t know if they’re being monitored, and that uncertainty makes people more obedient,” said Mr. Chorzempa, the Peterson Institute fellow. He described the approach as a panopticon, the idea that people will follow the rules precisely because they don’t know whether they are being watched. In Zhengzhou, police were happy to explain how just the thought of the facial recognition glasses could get criminals to confess. Mr. Shan, the Zhengzhou railway station deputy police chief, cited the time his department grabbed a heroin smuggler. While questioning the suspect, Mr. Shan said, police pulled out the glasses and told the man that what he said didn’t matter. The glasses could give them all the information they needed. “Because he was afraid of being found out by the advanced technology, he confessed,” said Mr. Shan, adding that the suspect had swallowed 60 small packs of heroin. “We didn’t even use any interrogation techniques,” Mr. Shan said. “He simply gave it all up.” < Here >
  24. Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans. “Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”. “It’s essentially prototyping the AI with human beings,” he said. This practice was brought to the fore this week in a Wall Street Journal article highlighting the hundreds of third-party app developers that Google allows to access people’s inboxes. In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company did not mention that humans would view users’ emails in its privacy policy. The third parties highlighted in the WSJ article are far from the first ones to do it. In 2008, Spinvox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work. In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots. In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them. “I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.” Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M. In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself. In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence. Alison Darcy, a psychologist and founder of Woebot, a mental health support chatbot, describes this as the “Wizard of Oz design technique”. “You simulate what the ultimate experience of something is going to be. And a lot of time when it comes to AI, there is a person behind the curtain rather than an algorithm,” she said, adding that building a good AI system required a “ton of data” and that sometimes designers wanted to know if there was sufficient demand for a service before making the investment. 
This approach was not appropriate in the case of a psychological support service like Woebot, she said. “As psychologists we are guided by a code of ethics. Not deceiving people is very clearly one of those ethical principles.” Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health. A team from the University of Southern California tested this with a virtual therapist called Ellie. They found that veterans with post-traumatic stress disorder were more likely to divulge their symptoms when they knew that Ellie was an AI system versus when they were told there was a human operating the machine. Others think companies should always be transparent about how their services operate. “I don’t like it,” said LaPlante of companies that pretend to offer AI-powered services but actually employ humans. “It feels dishonest and deceptive to me, neither of which is something I’d want from a business I’m using. “And on the worker side, it feels like we’re being pushed behind a curtain. I don’t like my labour being used by a company that will turn around and lie to their customers about what’s really happening.” This ethical quandary also raises its head with AI systems that pretend to be human. One recent example of this is Google Duplex, a robot assistant that makes eerily lifelike phone calls complete with “ums” and “ers” to book appointments and make reservations. After an initial backlash, Google said its AI would identify itself to the humans it spoke to. “In their demo version, it feels marginally deceptive in a low-impact conversation,” said Darcy. Although booking a table at a restaurant might seem like a low-stakes interaction, the same technology could be much more manipulative in the wrong hands. What would happen if you could make lifelike calls simulating the voice of a celebrity or politician, for example? “There’s already major fear around AI and it’s not really helping the conversation when there’s a lack of transparency,” Darcy said. < Here >
  25. An AI has defeated China’s top doctors in detecting brain tumors and predicting hematoma expansion. The system beat a team of doctors ranked among the top 15 in the nation. The AI is named BioMind, and it was developed by the Artificial Intelligence Research Centre for Neurological Disorders at Tiantan Hospital in Beijing. BioMind was correct 87% of the time, while the doctors reached 66% accuracy. The AI took just 15 minutes to work through 225 cases, while the doctors took twice as long. In predicting hematoma expansion, the AI achieved 83% accuracy, while the doctors scored only 63%. The researchers trained the AI using thousands of images from Beijing Tiantan Hospital’s archives, which gives it a level of ‘experience’ in identifying neurological diseases similar to that of the most senior doctors. Wang Yongjun, executive vice president of the Beijing Tiantan Hospital, told Xinhua that he didn’t actually care about the battle between doctors and AI, as the aim was to show the potential for collaboration in the future. “I hope through this competition, doctors can experience the power of artificial intelligence,” he said. “This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it.” “It will be like a GPS guiding a car. It will make proposals to a doctor and help the doctor diagnose,” he said. “But it will be the doctor who ultimately decides, as there are a number of factors that a machine cannot take into consideration, such as a patient’s state of health and family situation.” This could be the biggest step forward so far for AI technology in improving healthcare; however, only time will tell how patients will react to AI involvement in their care. < Here >