Showing results for tags 'privacy'.



Found 195 results

  1. iOS 14’s Best Privacy Feature? Catching Data-Grabbing Apps

Apple's new operating system hasn't been released to the public yet, but its new permission notifications are already shaming developers into cleaning up their acts.

With every iOS update, users gain more control over the data that app developers can collect about them. iOS 14 is no different, except for one thing: it hasn’t even left beta, and its privacy features are already causing havoc for major app developers. Privacy notifications, which pop up whenever an app accesses the microphone, camera, or clipboard, have outed many apps’ dubious data-collecting behaviors in the past few weeks.

It’s just one item in a laundry list of new privacy-preserving features in iOS 14, which also requires developers to declare what data their apps collect, lets users share their approximate location with an app instead of their precise location, and requires developers to get users’ permission before tracking them for advertising purposes. But of all these additions, it’s the privacy notifications that have been causing chaos for app developers, ratting out apps left and right ever since the beta was released back in June.

Last week, Instagram became the latest app to be called out by iOS 14’s privacy notifications after users began noticing that the green indicator light, which alerts users that the camera has been activated, kept turning on even when the camera was not in use. Addressing the behavior, Instagram said the camera activation was just a bug, triggered when a user swiped into the camera from the Instagram feed.
TikTok, LinkedIn, and Reddit have all been caught out by the new privacy notifications, with users noticing alerts telling them that the apps were copying content from other apps every few keystrokes. All of them promised to fix the issues: Reddit blamed the behavior on a bug, TikTok said it was copying clipboard data as an antispam measure, and LinkedIn said it copied clipboard data to perform an equality check between what the user was typing and what was in their clipboard.

Apple is able to detect this behavior whenever an app accesses the camera, microphone, or clipboard because all such access has to go through Apple’s APIs. “Functions like the clipboard and microphone need to be accessed through the operating system. [Apple] can check whether the access was initiated by the user via a UI selection or was being performed unprompted by the application,” says Arosha Bandara, professor of software engineering at the Open University.

Researchers have warned for years that several major apps were storing clipboard data, but the iOS 14 beta makes the behavior public for everyone to see for the first time. Back in March, security researchers Talal Haj Bakry and Tommy Mysk identified 53 apps that were copying clipboard data without users’ consent.

“I believe that these privacy modifications are a huge step forward from a user perspective, because developers and Apple engineers knew about this before, but users didn't know about it,” says security engineer Anastasiia Voitova. “Now users can see, so it's making things transparent. Users can start asking questions.”

Voitova says there are a few reasons why app developers may be collecting clipboard data; one of them is ad tracking. “From an iOS perspective, I imagine there are quite a lot of apps that access the clipboard,” says Aidan Fitzpatrick, founder of app data firm Reincubate.
“I imagine there are quite a lot of apps that abuse what’s on the clipboard to boost engagement in their app or learn more about you.”

Game developer Popcap and Airbnb’s HotelTonight app, both of which had been seen capturing clipboard data, told The Telegraph that they had traced the behavior back to third-party libraries from Google and product-testing firm Apptimize. This suggests the clipboard copying is unintentional on the app developers’ side and may simply be a side effect of careless coding: many app developers pull third-party libraries into their apps, and those libraries can copy the clipboard without the developer realizing it. “The libraries inside the app gather the same permissions as the application itself, but developers often don't read the code of third-party libraries,” explains Voitova. “A developer might have really good intentions, but some libraries that they use can misuse permissions to do something bad.”

There are, of course, also legitimate user-experience reasons why an app might access your clipboard without asking. A delivery app, for example, might want to automatically paste a tracking number into a text field when the app opens.

But for apps that are maliciously capturing clipboard data or using the microphone, these privacy notifications and indicator lights could push them to change their dodgy behavior. The iOS 14 privacy notifications have already pushed TikTok, LinkedIn, Reddit, and Instagram to announce that they will code out the bug or stop the behavior altogether. Vice admitted that it didn't even know its Vice News app, which was flagged by Haj Bakry and Mysk, was accessing the clipboard until the iOS 14 beta was released.

Still, it’s wise to remember that most permissions abuse happens on Google’s Android operating system.
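Bandara's point, that clipboard and microphone access is funneled through the operating system, is what makes notifications like these possible. Here is a toy Python model of that mediation; the names and classes are invented for illustration, and this is a conceptual sketch, not Apple's actual API:

```python
# Toy model of OS-mediated clipboard access (a conceptual sketch, not Apple's
# implementation). Because every paste request passes through a single
# OS-level interface, the OS can serve the request AND surface a user-visible
# notification -- which is essentially what the iOS 14 banner does.

from typing import Callable, List

class Clipboard:
    """Stands in for the operating system's pasteboard service."""

    def __init__(self, notify: Callable[[str], None]):
        self._contents = ""
        self._notify = notify            # hook the OS uses to alert the user

    def copy(self, text: str, app: str) -> None:
        self._contents = text            # writes are silent, as on iOS

    def paste(self, app: str) -> str:
        # Every read is observable by the OS, so it can tell the user
        # exactly which app just pulled data off the clipboard.
        self._notify(f"{app} pasted from clipboard")
        return self._contents

banners: List[str] = []
clipboard = Clipboard(notify=banners.append)

clipboard.copy("hunter2", app="PasswordManager")
clipboard.paste(app="SocialApp")         # even a silent background read is logged

assert banners == ["SocialApp pasted from clipboard"]
```

Because `paste()` is the only way to read the contents, the OS-level hook sees every access, whether or not the user initiated it; that is the architectural property the quote describes.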
Last year, researchers from the International Computer Science Institute found that up to 1,325 Android apps were gathering data despite being denied permission to access it. Whether Google decides to implement privacy notifications, however, is a different story. The company has not said whether it intends to add a similar feature, though recent versions of Android have been giving users more information about the data that apps collect.

Maximilian Golla, a security researcher at the Max Planck Institute for Security and Privacy, says the business model on Android is different from iOS. “I wonder whether the app developers really want to change this, or Google really wants to implement such a feature, because they depend on this kind of tracking,” he says. “Google makes its money from Google AdSense, and I would be surprised if Google implements such a tracking notification.” So while privacy notifications are having the unintended consequence of forcing developers to change their tracking habits, this transparency culture shift might occur only on iOS.

Ultimately, Fitzpatrick thinks these privacy notifications will eventually flush tracking behavior out of iOS apps. “Either they're going to stop doing it or they're going to have to explain why,” he says.
  2. Dear friends, nowadays our privacy is very important. I would like to know which VPN service you use and which one is the best in your opinion. Not all VPN services are secure enough; it was recently discovered that HotSpot Shield could, in some cases, reveal your real IP. Have a look here: 1. Android 2. Windows. Thanks for your time spent on this poll! :)
  3. How to Get Safari's New Privacy Features in Chrome and Firefox

Apple's browser is getting serious about security protections. If you can't or won't switch, don't worry: you don't have to fall behind.

Apple just unveiled a raft of changes coming with the new macOS Big Sur later this year. Along with the visual redesign, the introduction of Control Center, and upgrades to Messages, the built-in Safari browser is getting new and improved privacy features to keep your data locked away. You don't have to wait for macOS Big Sur to drop to get a lot of these upcoming features, though: both Mozilla Firefox and Google Chrome have similar features, or can add them with the help of a third-party extension. Here's how you can get Firefox or Chrome up to par with Safari in macOS Big Sur today.

The Changes Coming to Safari

Privacy and data protection are already big priorities for Safari, but the version coming with macOS Big Sur will go even further to protect you from being tracked on the web. Some of the existing features are becoming more visible, and Safari is also embracing more extensions, with as much care for user safety as possible. The browser already warns you against using passwords that are easily guessed or that you've used before (assuming they're saved in Safari's password locker), but the next version will also warn you if your email address, username, or password has been exposed in a data breach online, making the need to change your password even more urgent.
A new Privacy Report button is being added to the toolbar; click it to see exactly which trackers Safari is blocking in its ongoing attempts to stop advertisers and companies from following you around the web. Safari is particularly good at stopping "fingerprinting," where various characteristics of your device (like screen resolution and operating system) are used to figure out who you are. The same Privacy Report is displayed on your browser start page, which should give you a better idea of which sites are most aggressively trying to track you, as well as showing off the work that Safari is doing in the background.

Safari in macOS Big Sur is also boosting support for extensions. (Safari already has extensions, but there aren't many of them.) New developer tools will make it easier to port add-ons from Chrome and Firefox, and Safari will give users a suite of controls to limit the browsing data and other information that extensions can access.

Adding Features to Chrome

Google already checks the passwords it saves for you against a database of leaked credentials (besides warning about duplicates and passwords that could be easily guessed); this is a Google account feature as well as a Chrome one. From the Chrome Settings panel, click Passwords, then Check passwords, to run an audit.

You can already get some tracking data about a site by clicking the icon to the left of the URL in Chrome's address bar (the icon will be either a padlock or an info bubble). To get even more tracking data, and to selectively block it, Safari-style, you can use an extension like uBlock Origin: one click shows you how many trackers are active on a page and which have been stopped by uBlock Origin.
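Under the hood, blockers like uBlock Origin work largely from filter lists: the host of each outgoing request is checked against known tracker domains before the request is allowed. A minimal Python sketch of that core idea follows; the blocklist entries are invented examples, and real filter lists are far larger and support much richer rules:

```python
# Minimal sketch of blocklist-based tracker blocking, the core technique
# behind extensions like uBlock Origin. The domains below are invented.

from urllib.parse import urlparse

BLOCKLIST = {"tracker.example", "ads.example.net"}

def is_blocked(url: str) -> bool:
    """Block a request if its host is a blocklist entry or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

assert is_blocked("https://tracker.example/pixel.gif")       # exact match
assert is_blocked("https://cdn.ads.example.net/ad.js")       # subdomain match
assert not is_blocked("https://news.example.org/article")    # ordinary site
```

Real blockers also match URL paths, resource types, and cosmetic rules, but domain matching like this catches a large share of trackers, and it is cheap enough to run on every request.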
As well as stopping tracking across multiple sites, uBlock Origin also suppresses aggressive ads and protects against sites embedded with malware. A similar tool for Chrome is Disconnect: again, a single click blocks tracking technologies, unwanted advertising, and social plug-ins (used by the likes of Facebook to see what you're up to as you move around the web). Individual trackers, or sites as a whole, can be granted permission to operate outside the restrictions put in place by uBlock Origin and Disconnect, which is useful for sites with responsible advertising that you want to support. As an added bonus, all of this blocking should mean a faster browsing experience too.

Policing extension permissions isn't quite as easy in Chrome as it sounds like it will be in the next Safari upgrade, but you do have options: choose More Tools, then Extensions, from the Chrome menu, then click Details next to any extension. The next page shows you the permissions the add-on has and lets you set when and how the utility can read your browsing data: on all sites (everywhere you go, without question), on specific sites (only on sites you specifically list), or on click (so you'll be asked for permission whenever access is required).

Adding Features to Firefox

Firefox already packs plenty of user privacy and anti-tracking technology into its interface, so you don't need to do much tweaking to get it up to par with the improvements Apple just announced for Safari. It blocks more than 2,000 web trackers by default, for example, and warns you if your details are included in a data breach as part of its Firefox Monitor and Firefox Lockwise tools.
Click the little purple shield icon to the left of the address bar on any site to see what Firefox has blocked, including advertising trackers, social media plug-ins, attempts to fingerprint your device, and more. Firefox will intelligently allow some plug-ins to run if blocking them would seriously compromise the functionality of the site; it's then your choice whether to continue using the site or find an alternative. To open a report on how these various measures are working over time, open the main Firefox menu and choose Privacy Protections.

If you open Preferences, then Privacy & Security, from the Firefox menu, you can choose how these measures (called Enhanced Tracking Protection) are applied. Three modes of operation are available: Standard, Strict, and Custom, and it's possible to tailor the level of blocking for specific sites too. Enhanced Tracking Protection can also be turned off for sites that you particularly trust.

It's fantastic having all of these features built right into Firefox, and it may be where Apple got some of its inspiration for Safari, but plenty of third-party extensions are also available if you want to go even further. uBlock Origin and Disconnect are both available for Firefox as well as Chrome, for example, and both work in the same way: with one click on the browser toolbar you can see which adverts and trackers are being blocked.

To keep watch over what your extensions are allowed to do in Firefox, choose Add-ons, then Extensions, from the program menu. Click the three dots next to any extension to see the data and browser features it has access to. For the time being you can't change these permissions, though you can block add-ons from running in private browser windows; if an extension uses a permission you're not happy with, you'll have to uninstall it.
  4. Facebook's New Privacy Controls Are Long Overdue

The new Manage Activity feature will let you archive and bulk-delete posts for the first time.

Throughout its 16 years of existence, Facebook has struggled to provide the privacy controls users really want and need to safeguard the data they post to the platform. It's been a challenging project, with plenty of major detours along the way. But today Facebook is announcing a new tool for managing your posts. It may be the most intuitive version of the controls yet, because it's basically what Gmail has offered for email since the year Facebook launched.

The new feature, known as Manage Activity, will introduce the concept of an "archive" on Facebook, allowing you to move any or all of your past posts to a secret new home that only you can see. Manage Activity will also introduce a Facebook Trash folder so you can delete posts more easily. The new feature is rolling out in Facebook's mobile apps first. "Whether you're entering the job market after college or moving on from an old relationship, we know things change in people’s lives, and we want to make it easy for you to curate your presence on Facebook," the company says in a blog post.

Archive and Trash are two concepts most people are familiar with from email. As with Gmail, posts you move to the trash will stay there for 30 days and then be deleted, unless you manually delete them sooner or reinstate them. And Manage Activity introduces batch actions for multiple posts at once, so you can view and organize in bulk rather than going post by post. You can filter by date range, type of post (Photos and Videos, Posts From Other Apps, etc.), specific people, and other categories, and then select individual posts to batch-archive or batch-delete as needed. If you archive or delete a post that people are tagged in, they’ll lose access to the post.
Facebook says they won't be notified of the change, though, so you can archive stealthily. You can't archive other people's posts that you are tagged in; this is only for content you've shared.

Facebook already offers a Select Privacy menu for each of your posts where you can choose who can view the content; your options include Public, Friends, Specific friends, and so on. In some ways, archiving a post is similar to choosing Only Me, in the sense that only you will be able to view it. But if you've been using Only Me as a makeshift archiving feature, Manage Activity isn't going to help put everything in one place: all archived posts will show up together, but Facebook says there isn’t a way to view the collection of posts you've marked Only Me.

After dedicating 2018 to figuring out how to "fix" Facebook, founder and CEO Mark Zuckerberg published "A Privacy-Focused Vision for Social Networking" in March 2019. In it he discussed a problem that every Facebook user has likely been aware of for more than a decade. "One challenge in building social tools is the 'permanence problem,'" Zuckerberg wrote. "As we build up large collections of messages and photos over time, they can become a liability as well as an asset. For example, many people who have been on Facebook for a long time have photos from when they were younger that could be embarrassing. But people also really love keeping a record of their lives."

Midway through 2018, Zuckerberg promised a feature that would let users clear their browsing history from Facebook. It took more than a year for the tool, Off-Facebook Activity, to finally roll out in August 2019. The feature provides an accounting of the third-party websites and apps that share your visit history with Facebook, and gives you the option to clear it out. It also lets you block Facebook from using your browsing history for targeted ads.
All the way back in November 2011, though, Zuckerberg was grappling with similar issues. "Facebook has always been committed to being transparent about the information you have stored with us—and we have led the internet in building tools to give people the ability to see and control what they share," he wrote. "But we can also always do better. I’m committed to making Facebook the leader in transparency and control around privacy." Nine years after those remarks, and 16 years after its founding, Facebook is somehow still in the process of going back to basics with its privacy features.
  5. virendra

    WPD Stable 1.3.1532

    The real privacy dashboard for Windows
    Stable: 1.3.1532 | Beta: N/A | Rules: May 8, 2020
    Site: https://wpd.app
    Sharecode: /get/latest.zip
    Support: https://wpd.app/donate/

    A small but powerful portable tool working via the Windows API. WPD is the most convenient and proper way to customize privacy-related settings in Windows.

    Features
      • Privacy management: customize Group Policy, Services, Tasks, and other settings responsible for data collection and transmission.
      • IP Blocker: block telemetry IP addresses using rules from the @crazy-max repository.
      • Appx uninstaller: easily remove pre-installed Microsoft Store apps or any other appx.
      • Portable, freeware, no ads, command line arguments supported.

    Supported OS
      • Windows 10 Enterprise 2004, 1909, 1903, 1809, 1803, 1709, 1703, 1607
      • Windows 10 Enterprise LTSC 2019, 2016, 2015
      • Windows 10 Education 2004, 1909, 1903, 1809, 1803, 1709, 1703
      • Windows 10 Pro 2004, 1909, 1903, 1809, 1803, 1709, 1703, 1607
      • Windows 10 Home 2004, 1909, 1903, 1809, 1803, 1709, 1703
      • Windows Server Standard 2019, 2016
      • Windows 8–8.1
      • Windows 7

    Requirements
      • .NET Framework 4.5+

    ENJOY
  6. Just wanted to open a discussion about building a somewhat secure PC or laptop with respect to the privacy of the business/user. Everything is slowly becoming closed source, and end users have no ability to control the security of their systems (e.g. Spectre, Meltdown, Thunderbolt exploits, the Intel Management Engine, etc.). I am very interested in building a computer that runs open-source system software such as Linux and has full access to CPU firmware code through projects such as Libreboot or Coreboot. This is something I have been researching for a while. I would be very interested in hearing from others with such a setup, or any ideas on the above topic. Any ideas about implementing such a system, or just brainstorming on how to build a secure setup, would be great. I'm talking about open-source software, hardware switches, and manually removing components such as microphones and cameras to prevent three-letter agencies from stockpiling data and hoarding it in fusion centers. For a bit of context, please watch this documentary. Any feedback on systems by companies such as System76 and Purism and the like would be phenomenal! Thanks in advance!
  7. The VPN industry has exploded over the past few years. Fuelled by greater awareness of online security, a desire to watch geo-restricted content, and, yes, piracy, more people are hiding their online identities than ever. But did you know that many VPN providers are owned by the same few companies?

A report from The Best VPN, shared exclusively with TNW, looks at five companies in particular: Avast, AnchorFree, StackPath, Gaditek, and Kape Technologies. It found that over the past few years, these companies have acquired a total of 19 smaller players in the VPN space, including HideMyAss and CyberGhost VPN.

AnchorFree

The company with the most brands under its belt is AnchorFree. That's not surprising, since it's the only firm on the list founded primarily to serve the VPN market. While the other companies on the list own well-known and established VPN products, they also have a lot of other interests, particularly in information security services and products. The Best VPN was able to draw links between AnchorFree and seven smaller VPN brands: Hotspot Shield, Betternet, TouchVPN, VPN in Touch, Hexatech, VPN 360, and JustVPN. The report notes that AnchorFree isn't consistently transparent about telling consumers which brands it owns. While some products carry the AnchorFree logo clearly (like Hotspot Shield), others require you to dig deep into the site's terms and conditions to find out who owns what.

StackPath

The next company on the list is StackPath. The Best VPN describes it as a “huge cyber-security company,” and that's accurate. The firm has raised over $180 million, with revenues of more than $157 million in 2017. Driving this success is a Batman's-utility-belt's worth of sub-brands and products, including several VPN brands (IPVanish, StrongVPN, Encrypt.me) as well as CDN, cloud computing, and information security products.
StackPath also provides the infrastructure required to launch a VPN service to other brands through its WLVPN service, which powers Pornhub's VPN offering (predictably called VPNHub) as well as Namecheap VPN.

Avast

Avast is a Czech cybersecurity firm best known for its free antivirus software. Over the years, the company has quietly carved out a respectable position in the competitive VPN market. It owns four brands: HideMyAss, Avast Secureline VPN, AVG Secure VPN, and Zen VPN. It's interesting to note that Avast got its hands on two of these products, namely HideMyAss and AVG Secure VPN, through its $1.3 billion acquisition of AVG Software in 2016.

Kape and Gaditek

With only two VPN brands apiece, Kape and Gaditek are the smallest companies on this list, but they couldn't be more different. Kape is primarily an investment vehicle focusing on the tech sector, and is listed on the London Stock Exchange. Gaditek, on the other hand, is a sprightly Pakistani startup based in the bustling city of Karachi. The jewel in Kape's crown is Romania's CyberGhost VPN, which it acquired for €9.2 million (roughly $9.7 million) in March 2017. The following year, it bought another top-tier VPN provider, ZenMate, which claims more than 40 million users. Gaditek, meanwhile, focuses on the budget end of the market. It owns PureVPN and Ivacy, both of which offer ultra-affordable plans.

Does this matter?

There's nothing wrong, or even especially inappropriate, about a larger player acquiring smaller rivals. Just look at Google, which has acquired more than 200 companies over its 20-year life. Acquisitions are the heart and soul of the technology business. But that doesn't explain why the VPN market remains so fragmented, with hardly any brands absorbed into their larger owners. Liviu Arsene, senior e-threat analyst at Bitdefender, suggests that this fragmentation reinforces the sense of privacy that's vital to the success of a VPN product.
Arsene also argued that allowing VPN providers to retain their independence after an acquisition helps them remain agile and innovative. “Large VPN providers that operate a single large-scale infrastructure have a harder time integrating new privacy-driven technologies because of compatibility, integration, and deployment issues,” he said. “The VPN industry is all about having as many servers around the world as possible, in order to ensure both availability and coverage for their customers. Acquiring smaller VPN companies and allowing them to operate independently makes sense because these infrastructures need to be agile, flexible, dynamic, and constantly integrating new privacy-driven technologies in order to allow for more privacy for their clients,” Arsene added.

This argument was echoed by a representative from Hide.me, who also suggested that having separate providers allows larger VPN conglomerates to target every segment of the market. “It is more profitable to obtain users through the acquisition of smaller VPN providers than to obtain those users by using standard marketing channels. Once they have that access, they are using a smaller brand for test runs of different business models without direct harm to the mainstream brand. Usually, acquired smaller VPN providers have another price structure than the main brand, and they can cover a more significant chunk of the market,” they explained.

Original post: https://thenextweb.com/tech/2019/01/23/youd-be-surprised-how-many-vpns-are-owned-by-the-same-company/
By: MATTHEW HUGHES
  8. The Zoom Privacy Backlash Is Only Getting Started

A class action lawsuit. Rampant Zoombombing. And, as of today, two new zero-day vulnerabilities.

The popular video conferencing application Zoom has been having A Moment during the Covid-19 pandemic, but it's not all positive. As many people's professional and social lives move completely online, Zoom use has exploded. With this boom, though, has come added scrutiny from security and privacy researchers, and they keep finding more problems, including two fresh zero-day vulnerabilities revealed Wednesday morning. The debate has underscored the inherent tension of balancing mainstream needs with robust security: go too far in either direction, and valid criticism awaits.

"Zoom has never been known as the most hardcore secure and private service, and there have certainly been some critical vulnerabilities, but in many cases there aren't a lot of other options," says security researcher Kenn White. "It's absolutely fair to put public pressure on Zoom to make things safer for regular users. But I wouldn't tell people 'don't use Zoom.' It's like everyone is driving a 1989 Geo and security folks are worrying about the air flow in a Ferrari."

Zoom isn't the only video conferencing option, but displaced businesses, schools, and organizations have coalesced around it amid widespread shelter-in-place orders. It's free to use, has an intuitive interface, and can accommodate group video chats for up to 100 people. There's a lot to like. By contrast, Skype's group video chat feature only supports 50 participants for free, and live-streaming options like Facebook Live don't have the immediacy and interactivity of putting everyone in a digital room together. Google offers multiple video chat options, maybe too many if you're looking for one simple solution.
At the same time, recent findings about Zoom's security and privacy failings have been legitimately concerning. Zoom's iOS app was quietly—and, the company says, accidentally—sending data to Facebook without notifying users, even if they had no Facebook account. The service pushed a fix late last week. Zoom also updated its privacy policy over the weekend after a report revealed that the old terms would have allowed the company to collect user information, including meeting content, and analyze it for targeted advertising or other marketing. And users have been creeped out by Zoom's attention-tracking feature, which lets the meeting host know if an attendee hasn't had the Zoom window in their screen's foreground for 30 seconds.

During the pandemic, a type of online abuse known as Zoombombing, in which trolls abuse Zoom's default screen-sharing settings to take over meetings—often with racist messages or pornography—has also spiked. Zoom offers tools to protect against that sort of assault, specifically the option to password-protect your meeting, add a waiting room for pre-vetting attendees, and limit screen sharing. Some paid and free specialty versions of the service, like Zoom for Education, also have different screen-sharing defaults. But in general the service doesn't highlight these options in a way that would make them intuitive to enable.

"It's as though, in suddenly shifting from the office to work from home, we didn't so much move the conference room into our kitchens as into the middle of the public square," says Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford's Center for Internet and Society. "Enterprise platforms are now seeing the same abuse problems that we've long been used to seeing on Twitter, YouTube, Reddit, etc. Those platforms were inherently designed to let strangers contact other strangers—and yet they had to tack on anti-abuse features after-the-fact, too."
Perhaps most jarring of all, the service has a security feature that it falsely described as being "end-to-end encrypted." Turning on the setting does strengthen the encryption on your video calls, but it does not afford them the protection of true end-to-end encryption, in which no intermediary, Zoom included, can access the content. Achieving full end-to-end encryption in group video calling is difficult; Apple memorably spent years finding a way to implement it for FaceTime. And for a service that can support so many streams on each call, it was always unlikely that Zoom had actually achieved this protection, despite its marketing claims. Zoom did not return a request for comment from WIRED about how it is handling this deluge of security and privacy findings in its product.

All of this is compounded by the fact that, even before the pandemic, Zoom had a reputation for prioritizing ease of use over security and privacy. Notably, a researcher revealed flaws last summer in how Zoom seamlessly joined users into call links and shared their camera feeds without an initial check to let users confirm they wanted to launch the app. That meant attackers could have crafted Zoom links that instantly gave them access to a user's video feed, and everything going on around them, with one click. The research also built on previous Zoom vulnerability findings.

Zoom's gaffes have also started to invite even more potentially consequential scrutiny. The company is facing a class action lawsuit over the data its iOS app sent to Facebook. And the office of New York attorney general Letitia James sent a letter to the company on Monday about its mounting punch list. "While Zoom has remediated specific reported security vulnerabilities, we would like to understand whether Zoom has undertaken a broader review of its security practices," the attorney general's office wrote.
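The distinction drawn above, between traffic that is merely encrypted in transit and traffic that is end-to-end encrypted, can be sketched with a toy model. The XOR "cipher" below stands in for real cryptography purely for illustration, and none of this reflects how Zoom or any real service is implemented:

```python
# Toy illustration (NOT real cryptography) of transport encryption versus
# end-to-end encryption. All names and keys here are invented for the sketch.

def toy_encrypt(key: int, text: str) -> bytes:
    """XOR 'cipher' standing in for real encryption, for illustration only."""
    return bytes(b ^ key for b in text.encode())

def toy_decrypt(key: int, data: bytes) -> str:
    return bytes(b ^ key for b in data).decode()

# Transport encryption: each hop is encrypted, but the server holds the keys.
link_key = 7                                  # shared between client and server
sent = toy_encrypt(link_key, "secret meeting")
seen_by_server = toy_decrypt(link_key, sent)  # the server CAN read the plaintext
assert seen_by_server == "secret meeting"

# End-to-end encryption: only the callers hold the key.
e2e_key = 42                                  # known to the endpoints, not the server
sent = toy_encrypt(e2e_key, "secret meeting")
assert toy_decrypt(link_key, sent) != "secret meeting"  # server sees only noise
assert toy_decrypt(e2e_key, sent) == "secret meeting"   # other endpoint recovers it
```

In the transport-only case the relay holds the key, so it can read, log, or analyze the content; with end-to-end encryption the relay only ever handles opaque bytes, which is the guarantee Zoom's marketing implied but did not deliver.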
Given this track record and all the commotion about Zoom security in the last few weeks, macOS security researcher Patrick Wardle says he recently got interested in poking at the Mac desktop Zoom app. Today he is disclosing two new security flaws he found during that brief analysis. "Zoom, while great from a usability point of view, clearly hasn’t been designed with security in mind," Wardle says. "I saw some researchers tweeting about strange Zoom behavior and literally within 10 seconds of looking at it myself I was just like aw, man. Granted I research this stuff, so I know what to look for. But Zoom has just had so many missteps, and that’s very indicative of a product that has not been adequately audited from a security point of view." Wardle's findings pose limited risk to users in practice, because they would first require the presence of malware on a target device. One attack focuses on a Zoom installation flow that still relies on a now-retired application programming interface from Apple. The company deprecated the API because of security concerns, but Wardle says that he sometimes still sees products using it as a lazy workaround. An attacker who has infected a victim device with malware, but hasn't yet achieved full access, could exploit Zoom's insecure install settings to gain root privileges. The other vulnerability Wardle found is more significant, though still only a local access bug. macOS offers a feature called "hardened runtime" that lets the operating system act as a sort of bouncer while programs are running and prevent code injections or other manipulations that are typically malicious. Developers can choose to add exemptions for third-party plugins if they want to have that additional functionality from an external source, but Wardle notes that such exceptions are typically a last resort, because they undermine the whole premise of "hardened runtime." 
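For context on how such an exemption is expressed: a macOS app's hardened-runtime entitlements are declared at code-signing time, and opting out of library validation looks roughly like the following property-list fragment. This is an illustrative sketch only, not Zoom's actual entitlements file.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hardened runtime itself is enabled when signing (codesign
         ...  -/-options runtime), but this entitlement exempts the app
         from library validation, allowing it to load third-party
         libraries that Apple or the developer's team did not sign. -->
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
</dict>
</plist>
```

The entitlements an installed app actually carries can be inspected with Apple's codesign tool, e.g. `codesign -d --entitlements :- /Applications/Example.app` (path hypothetical).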
Yet Zoom's macOS application has such an exemption for third-party libraries, meaning malware running on a victim's system could inject code into Zoom that's trusted and essentially link the two applications—allowing the malware to piggyback on Zoom's legitimate microphone and video access and start listening in on a victim or watching through their webcam whenever the malware wants. Though it doesn't look like researchers will stop finding flaws in Zoom any time soon, the most important takeaway for regular users is simply to think carefully about their security and privacy needs for each call they make. Zoom's security is likely sufficient for most people's general communications, but there are more protected group video chat options—like those offered by WhatsApp, FaceTime, and particularly Signal—that could be a better fit for sensitive gatherings. "The reality is that companies are going to have mistakes in their software," says Jonathan Leitschuh, a security researcher who found the webcam hijacking flaws in Zoom last summer. "The more criticism of a platform, the more secure it’s hopefully going to be. So hopefully Zoom is taking the information that they’re gaining and actually acting on it. But if you need to be secure and secret I would not recommend you have those conversations over Zoom. Use a platform that’s built for the level of security you need." Source: The Zoom Privacy Backlash Is Only Getting Started (Wired)
  9. Personal privacy matters during a pandemic — but less than it might at other times Public health weighs individual privacy against the common good Photo by JASON REDMOND/AFP via Getty Images During a disease outbreak, one of the best tools at the disposal of public health officials is low-tech detective work. When a person is diagnosed with an illness like COVID-19, the disease caused by the novel coronavirus, public health experts figure out where they’ve recently been and track down everyone they’ve been in contact with. “Sometimes it requires we know private information about a person who has been infected,” says Lisa Lee, director of the division of Scholarly Integrity and Research Compliance at Virginia Tech and former executive director of the Obama administration’s Presidential Bioethics Commission. It also can mean that they have to share some of that information, including information about someone’s health. Usually, people think about health privacy in terms of the relationship they have with their doctors and clinicians who have to keep the vast majority of information confidential — both legally and ethically. But the public health system is set up with different legal permissions and protections than a doctor’s office, and by nature, it thinks about ethics and patient privacy differently. “We think about it from the perspective of the mutual obligations we have towards each other and the need to protect well being,” says Amy Fairchild, dean and professor in the college of public health at Ohio State University. “What you’re doing is weighing the risks to the individual against the harm to the person’s contacts and the rest of the population.” Legally, there are carve-outs in health privacy laws like HIPAA that allow public health officials to get information about a person’s health without their consent. 
Individual privacy and the risks that can come from the disclosure of personal health information — like stigma — are still critical concerns for public health officials, Lee stresses. They aim to collect the minimum amount of information possible to achieve a public health goal. “The principle is to collect and use the least amount of data possible, because it reduces harm,” she says. The information collected is also used only for public health activities. The balance between protecting individual privacy and collecting information that is critical to the public good changes over the course of a disease’s spread. The amount of data public health officials need to collect and disclose changes as well. Right now, the COVID-19 pandemic is accelerating, and there is still a lot doctors and scientists don’t know about the disease. Collecting detailed health information is, therefore, more useful and important. That could change as the outbreak progresses, Lee says. For example, as the virus starts to circulate in the community, it might not be as important to know exactly where a sick person has been. If the virus is everywhere already, that information won’t have as much additional benefit to the community. “It depends a lot on the maturity of an epidemic,” she says. Digital tracking information is ubiquitous today, and that can make data collection easier. In Singapore, where there’s extensive surveillance, publicly available data details where people with confirmed cases of COVID-19 are and have been. The Iranian government built an app for people to check their symptoms that also included a geo-tracking feature. When deciding to use those types of tools, Lee says, the same public health principles should still apply. “Should a public health official know where a person has gone, should that be public information — it’s not different. It’s a lot easier to do that now, but it doesn’t make it any more right or less right,” she says. 
“Tracking where people go and who they interact with is something public health officials have been doing for centuries. It’s just easier with digital information.” In addition, just because personal information about a person and their health is important to a public health official, it doesn’t mean that information is important for the general public. It’s why, despite questioning from reporters, public health officials only gave out a limited amount of information on the people who had the first few cases of COVID-19 in the US. During the polio epidemic in the US, health departments used to publish the names of people with confirmed cases of the illness in the newspapers — a practice that would be far out of bounds today. But that didn’t stop people in the US from trying to find out information about the few cases of Ebola in the country during the 2014 outbreak. People didn’t need that information to protect themselves, though. “Having someone’s name doesn’t protect you,” Fairchild says. “That’s generally the principle of public health surveillance. There are emotional reasons that the public may want to know — but it doesn’t protect you, and shouldn’t change what you’re doing.” Health officials worry about the stigmatization of individuals or communities affected by diseases, which is why they aim to disclose only necessary information to the public. Anti-Asian racism in the US and other countries around the world spiked with the outbreak because the novel coronavirus originated in China. People who were on cruise ships with positive cases reported fielding angry phone calls from strangers when they returned home, and residents of New Rochelle, New York, which is the first containment zone in the US, said that they’re worried about their hometown being forever associated with the virus. “This kind of group-level harm is concerning,” Lee says. “That’s why we worry about group identity privacy, as well. 
I’m nervous and sad to see that starting to poke its head out.” People can’t expect the same level of personal health privacy during public health emergencies involving infectious diseases as they can in other elements of their health. But the actions public health officials can take, like collecting information, aren’t designed to limit privacy, Fairchild says. “It’s to protect the broader population. The principle we embrace is the principle of reciprocity. We recognize that our liberty is limited, but we are doing that for others.” Source: Personal privacy matters during a pandemic — but less than it might at other times (The Verge)
  10. Study finds Brave to be the most private browser Are you concerned about your web browser sending data back to the company that created it? A new study, Web Browser Privacy: What Do Browsers Say When They Phone Home?, looked at six popular desktop web browsers, Google Chrome, Mozilla Firefox, Microsoft Edge (Chromium-based), Apple Safari, Brave, and Yandex, to uncover what these browsers send back to the mothership. If you just want the result: used out of the box, Brave "is by far the most private of the browsers studied", followed by Chrome, Firefox, and Safari. Brave is the only web browser that did not use identifiers allowing tracking of the IP address over time and did not share details of web pages visited with backend servers. Chrome, Firefox, and Safari used identifiers that are linked to the browser instance and persist over sessions, and all three share web page details with backend servers via the browser's search autocomplete functionality. The study found that the Chromium-based Microsoft Edge browser and Yandex fared worse than the other browsers in the test. Both send identifiers linked to the device hardware, which means that the identifier persists even across installations. Edge sends the hardware UUID to Microsoft, and Yandex transmits a "hash of the hardware serial number and MAC address". Both also appear to send web page information to servers that "appear unrelated to search autocomplete". The researcher logged all network connectivity on the devices the browsers ran on. Chrome connections using QUIC/UDP had to be blocked so that the browser would fall back to TCP. To inspect encrypted data, mitmdump was used, and since leftovers can be an issue, extra care was taken to delete all traces of previous installations from the systems. The test design, repeated for each browser, was as follows:

- Start the browser from a fresh install/new user profile, paste a URL into the address bar, press Enter, and record the user activity.
- Close the browser and restart it, and record the network activity.
- Start the browser from a fresh install/new user profile and monitor network activity for 24 hours.
- Start the browser from a fresh install/new user profile, type a URL, and monitor traffic.

The conclusion:

For Brave with its default settings we did not find any use of identifiers allowing tracking of IP address over time, and no sharing of the details of web pages visited with backend servers. Chrome, Firefox and Safari all share details of web pages visited with backend servers. For all three this happens via the search autocomplete feature, which sends web addresses to backend servers in realtime as they are typed. In addition, Firefox includes identifiers in its telemetry transmissions that can potentially be used to link these over time. Telemetry can be disabled, but again is silently enabled by default. Firefox also maintains an open websocket for push notifications that is linked to a unique identifier and so potentially can also be used for tracking and which cannot be easily disabled. Safari defaults to a poor choice of start page that leaks information to multiple third parties and allows them to set cookies without any user consent. Safari otherwise made no extraneous network connections and transmitted no persistent identifiers, but allied iCloud processes did make connections containing identifiers. From a privacy perspective Microsoft Edge and Yandex are qualitatively different from the other browsers studied. Both send persistent identifiers that can be used to link requests (and associated IP address/location) to back end servers. Edge also sends the hardware UUID of the device to Microsoft and Yandex similarly transmits a hashed hardware identifier to back end servers. As far as we can tell this behaviour cannot be disabled by users. 
In addition to the search autocomplete functionality that shares details of web pages visited, both transmit web page information to servers that appear unrelated to search autocomplete. Closing Words The researcher analyzed the default state of the browsers and found that Brave had the most privacy friendly settings. At least some of the browsers may be configured to improve privacy by changing the default configuration, e.g. disabling autocomplete functionality. Source: Study finds Brave to be the most private browser (gHacks - Martin Brinkmann)
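The study's ranking boils down to how a transmitted identifier persists across browser sessions and reinstalls. That classification can be distilled into a small sketch (the function and labels here are our own illustration, not the researcher's code):

```python
def classify_identifier(session1, session2, reinstall):
    """Classify a transmitted identifier by how it persists across runs.

    session1/session2: values observed in two runs of the same user
    profile (browser restarted in between); reinstall: value observed
    after a fresh install / new user profile.
    """
    if session1 != session2:
        # Rotates every session: cannot be used to link a user's
        # IP address over time.
        return "ephemeral"
    if session1 == reinstall:
        # Survives a fresh install: tied to the device hardware,
        # as with Edge's hardware UUID or Yandex's hashed serial/MAC.
        return "device-linked"
    # Persists across sessions of one profile but not across reinstalls,
    # like the browser-instance identifiers seen in Chrome, Firefox, Safari.
    return "instance-linked"
```

On this scale, "device-linked" is the worst outcome for privacy, which is why Edge and Yandex ranked below the others.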
  11. Firefox is showing the way back to a world that’s private by default Tracking shouldn’t be the norm Illustration by Alex Castro / The Verge One of the nice things about looking at the full scope of tech news for the day is that two stories that you otherwise wouldn’t think to connect end up playing off each other perfectly. So it was today with the following pieces of news. First, Firefox is turning on a controversial new encryption methodology by default in the US. Second, Amazon is expanding its cashierless Go model into a full-blown grocery store. Here’s where I see the connection: both are about companies tracking your activities in order to gather data they could monetize later. Let’s take them one by one, starting with Amazon. You likely already know the story with Amazon Go stores: you can walk in and browse around, putting stuff in your cart as you like. Instead of checking out, you just leave. It all works because cameras track your every move and determine what you’ve picked up to charge you later. You can even pick something up, walk around the store with it, then put it back and leave and Amazon will figure that out. I know this because I’ve done it several times just to see. Nick Statt visited Amazon’s new expansion of that concept and reported an excellent story about how it works as a full grocery store. He also interviewed executives on the details of how it’s all being positioned. Notably, this is an “Amazon Go” store and not just a Whole Foods. For Amazon, they’re two distinct retail models (for now). Nick explains: That complexity inherent to the grocery market is why Amazon chose to brand its new store as a Go one, instead of choosing to bring its cashier-less Go model to an existing Whole Foods location. Amazon wants the freedom to sell people products from major brands they might find at a city bodega, a neighborhood CVS, or a Kroger store, and not just the organic and high-end ones Whole Foods sells today. 
That sets up Amazon to service a wider variety of customers: Go stores for the office lunch crowd, Go Grocery for the everyday residential shopper, and Whole Foods for the organic-minded and more affluent. But as you are thinking about this whole model I suspect that Amazon's go-to-market strategy isn't top of mind. Instead, there's either an alarm klaxon going off in your head or — at the very least — a quiet voice saying this: it seems super creepy for cameras to watch your every move as you walk around a store. The follow-up thought is that the convenience of not having to check out is perhaps not worth the tradeoff for the surveillance that's happening inside these stores. I hear the same alarm. But I also visited Amazon dot com this week and shopped for all sorts of things. If you think that the surveillance and data collection that happens at an Amazon Go retail location is creepy, well friend, it's got nothing on what Amazon can glean from what happens on its website. Keep that tension in your mind as we turn to Mozilla and Firefox. The core thing that Mozilla is doing is trying to encrypt DNS, which stands for Domain Name System. When you visit a website like www.theverge.com, what you're actually visiting is a much less descriptive series of numbers. DNS is the address lookup that tells your browser that the human-readable domain — theverge.com — is located at a particular IP address. For most browsers, that address lookup isn't encrypted, which means your internet service provider (or anybody else interested enough to snoop on it) could see what websites you're visiting. Putting DNS behind a secure connection means that it's less likely that snoops could see where you're going. The decision is controversial on a number of fronts. There's the ever-present concern about protecting kids from predators, of course, but there's also a large group of security experts who think it's not actually all that effective. 
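To make the mechanics concrete: the lookup itself is just a name-to-address question, and DNS over HTTPS wraps that same question in an encrypted HTTPS request to a resolver. A minimal sketch (the Cloudflare endpoint is one public example of a DoH resolver; actually sending the request would also need an `accept: application/dns-json` header):

```python
import socket
from urllib.parse import urlencode

def plain_lookup(host):
    # The classic, unencrypted path: ask the system resolver which
    # IP addresses a human-readable name maps to.
    return sorted({info[4][0] for info in socket.getaddrinfo(host, None)})

def doh_query_url(domain, resolver="https://cloudflare-dns.com/dns-query"):
    # DNS over HTTPS asks the same question inside an encrypted HTTPS
    # request, so on-path snoops see only the resolver, not the domain.
    return f"{resolver}?{urlencode({'name': domain, 'type': 'A'})}"
```

For example, `doh_query_url("theverge.com")` builds the query URL; the resolver's JSON response carries the A records that a plain lookup would have exposed in cleartext.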
On the whole, I think that Mozilla's decision is fundamentally good, even with the above caveats. That's because it shifts the Overton window for privacy, just a bit. Whatever you think of its efficacy, the shift helps to change our default assumptions about privacy: specifically, that browsing should be fully private. If you haven't connected the DNS story with the Amazon story on your own, let me lay it out more explicitly. I don't think we've fully grappled with the idea that by default there's not an expectation of privacy with what we do online. Think about it in other contexts: would you recoil at the idea of a company knowing what books you casually browsed at the library or bookstore? Probably you would, which is why the Amazon Go concept seems so squiggy (technical term). Until we got on the web, our browsing habits were private by default. Now, they're not. For web browsing, I admit that there are contexts where trusted people like parents (or less trusted but still in power over your time, like the company you work for) might have a legitimate reason for gathering information on the websites you visit. But for the most part, what we choose to look at should be our business — whether that happens online, in a library, or in a grocery store. And yet the default assumption online is that certain companies get to gather and use that information. Specifically, your ISP, the company that provides your web browser, or even any company that manages to drop a cookie on a website you happen to visit can all gather data on your web browsing habits. In a different world, one where we made different choices about how to construct and pay for the web in the early days, the online tradeoffs we've all agreed to would seem as bizarre as Amazon Go cameras tracking our every move in a grocery store. In this world, though, why have we accepted one online trade as normal while finding the trade of cameras in a store to be weird? 
When it comes to the online world, Mozilla’s solution may not be perfect. But it seems like a step in the right direction to me. As do all the other changes that are coming to web browsers — even Google Chrome is coming around to reducing tracking, as I’ve written about before. For the past twenty years at least, we’ve been living in a world where the default assumption is that it’s okay for companies to track our browsing habits because we get something in exchange. But if you’re squigged out by having your real-world store browsing habits tracked, sit with that feeling and ask yourself: should you feel differently about online browsing? Source: Firefox is showing the way back to a world that’s private by default (The Verge)
  12. Hey guys, here's my newest addition, which goes by the name of Debotnet. Debotnet is a free and portable tool for controlling Windows 10's many privacy-related settings and keeping your personal data private. Windows 10 has raised several privacy concerns because it has a lot of telemetry and online features, which send your data (sensitive and not) to Microsoft and can't be disabled. With Debotnet you can choose which unwanted functions you wish to deactivate. You will be able to select from almost 70 options to tailor your Windows 10 experience to your own privacy comfort level. You will find more information in the blog post here: https://www.mirinsoft.com/blog/6-take-charge-of-locking-down-your-privacy-with-debotnet
>> Download Debotnet from Mirinsoft: https://www.mirinsoft.com/ms-apps/debotnet
>> Download Debotnet from GitHub: https://github.com/Mirinsoft/Debotnet
  13. How to use Edge’s tools to protect your privacy while browsing The latest Chromium-based version offers more protection than its predecessors Illustration by Alex Castro / The Verge Version 80 of Microsoft’s Edge browser, now based on the Chromium source code, launched on January 15th, and with it came an increased focus on privacy. Edge includes tools to block both first-party cookies (used to keep you logged in or remember the items in your shopping cart) and third-party tracking cookies (used to keep track of your browsing activity). Below are instructions on how to change your settings, see what trackers are stored on your browser, and delete any cookies. We also address how Edge deals with fingerprinting, another method of tracking which identifies users by collecting details about their system configuration. Deal with trackers The new version of Edge blocks trackers by default using one of three different levels of protection. “Balanced,” which is active upon installation, blocks some third-party trackers along with any trackers designated as “malicious.” This mode takes into account sites you visit frequently and the fact that an organization may own several sites; it lowers tracking prevention for organizations you engage with regularly. “Basic” offers more relaxed control; it still blocks trackers, but only those Microsoft describes as “malicious.” You can also switch to “Strict,” which blocks most third-party trackers across sites. To change your level of protection: Click on the three dots in the top right corner of your browser window and go to “Settings,” then “Privacy and Services.” Make sure “Tracking prevention” is switched on, and then select which level you want. Adjust your tracking settings While Edge provides you with the three easy-to-choose tracking modes, you can also dive deeper to see which trackers are blocked, and make exceptions for specific sites. 
On the “Privacy and Services” page, look for the “Blocked trackers” link just beneath the three tracking prevention modes. Here, you can see all of the trackers Edge has blocked. Beneath that is the “Exceptions” link, where you can specify any sites where you want tracking prevention turned off. When you’re at a site, you can see an accounting of how effective your tracking prevention is by clicking on the lock symbol on the left side of the top address field. The drop-down box allows you to view the associated cookies and site permissions, allow or disable pop-ups, tweak the tracking permissions for that site, and see what trackers have been blocked. Clean up your cookies Conveniently, Edge can delete several types of data each time you close it, including browsing history, passwords, and cookies. Go to the “Clear browsing data” section of “Privacy and Services” (which can be found under the aforementioned tracking prevention levels). Click the arrow next to “Choose what to clear every time you close the browser.” Toggle on any of the data categories you’d like to be cleared each time you exit Edge. You can also manually clear your cookies and other data at any point: Next to “Clear browsing data now,” click on the button labeled “Choose what to clear.” This will open up a smaller window with several options. Select the box for “Cookies and other site data” or any other type of data you want to delete. Click “Clear now.” There are also other privacy features on the “Privacy and Services” page, including options to send a “Do Not Track” request (although the usefulness of such a request can be questionable) and to choose your search engine. Fingerprinting and ad blocking According to Microsoft, the three tracking prevention modes (especially the Strict mode) will help protect against the type of personalization that leads to fingerprinting. Edge does not block ads natively, but you can download ad-blocking extensions. 
Because the browser is now based on Chromium, many Chrome extensions (as well as extensions from the Microsoft Store) will work with this latest version of Edge, a distinct advantage. Source: How to use Edge’s tools to protect your privacy while browsing (The Verge)
  14. Apple Addresses iPhone 11 Location Privacy Concern Apple is rolling out a new update to its iOS operating system that addresses the location privacy issue on iPhone 11 devices that was first detailed here last month. Beta versions of iOS 13.3.1 include a new setting that lets users disable the “Ultra Wideband” feature, a short-range technology that lets iPhone 11 users share files locally with other nearby phones that support this feature. In December, KrebsOnSecurity pointed out the new iPhone 11 line queries the user’s location even when all applications and system services are individually set never to request this data. Apple initially said the company did not see any privacy concerns and that the location tracking icon (a small, upward-facing arrow to the left of the battery icon) appears for system services that do not have a switch in the iPhone’s settings menu. Apple later acknowledged the mysterious location requests were related to the inclusion of an Ultra Wideband chip in iPhone 11, Pro and Pro Max devices. The company further explained that the location information indicator appears because the device periodically checks to see whether it is being used in a handful of countries for which Apple hasn’t yet received approval to deploy Ultra Wideband. Apple also stressed it doesn’t use the UWB feature to collect user location data, and that this location checking resided “entirely on the device.” Still, it’s nice that iPhone 11 users will now have a way to disable the feature if they want. Spotted by journalist Brandon Butch and published on Twitter last week, the new toggle switch to turn off UWB now exists in the “Networking & Wireless” settings in beta versions of iOS 13.3.1, under Location Services > System Services. Beta versions are released early to developers to help iron out kinks in the software, and it’s not clear yet when 13.3.1 will be released to the general public. 
Source: Apple Addresses iPhone 11 Location Privacy Concern (KrebsOnSecurity - Brian Krebs)
  15. PUTRAJAYA: Finance Minister Lim Guan Eng has refuted claims that the e-Tunai initiative is an attempt by the government to obtain the public's personal information. He said the government does not have to go through such lengths for the information as all can be obtained from one's identity card. "I don't see how this can be construed as an effort to get personal information. If there is any benefit from this exercise, it's that we know about consumption patterns. "The government doesn't need the information because we are not involved in business or trading. It is just us wanting to do something for the people," he said when asked to comment on such claims. The claims for the e-Tunai Rakyat initiative kicked off on Wednesday (Jan 15) with Touch ‘n Go eWallet, Boost and GrabPay selected to be the service providers for the project. The initiative will run for two months until March 14, after which any unspent money will be forfeited. Malaysians aged 18 and above and who earn less than RM100,000 annually will be eligible to receive RM30 each through any of the participating e-wallets. The RM30 can be used to purchase goods and services available through the respective e-wallet of their choice. To get the money, they will first have to register and the claims process will be subjected to eligibility checks with the National Registration Department and the Inland Revenue Board (IRB). As of 10pm on Thursday (Jan 16), some 784,000 applications were received with 672,000 approved and RM18.8mil disbursed. Source: LGE dismisses claims e-Tunai initiative an attempt to collect private data (via TheStar Online)
  16. Verizon offers no-tracking search engine, promises to protect your privacy With "OneSearch," Verizon promises no cookie tracking or personal profiling. Verizon today launched a new search engine, claiming that its "OneSearch" service will offer users more privacy than the standard options in a market dominated by Google. Verizon's actual search results are provided by Microsoft's Bing, but Verizon added several privacy-focused features—while retaining the ability to serve contextual ads. "To allow for a free search engine experience, OneSearch is an ad-supported platform," Verizon said in its announcement. "Ads will be contextual, based on factors like search keywords, not cookies or browsing history." Verizon already offered one well-known search engine, namely Yahoo's, as a result of buying Yahoo's operating business for $4.48 billion in 2017. Yahoo's search results are also provided by Bing, but they don't come with the same privacy promises. Verizon said OneSearch comes with these privacy-focused features:

- No cookie tracking, retargeting, or personal profiling
- No sharing of personal data with advertisers
- No storing of user search history

Under the search bar is a toggle to turn on "Advanced Privacy Mode." This "encrypts your search terms and search URL, masking your search intent from third parties," Verizon says. The resulting "encrypted search results link will expire within an hour, adding another layer of privacy in the event that multiple people use the same device or if a search results link is shared with a friend," Verizon says. The Verizon search engine homepage says, "OneSearch doesn't use cookies. Period." Chrome detected that OneSearch did set one cookie on my computer, so that statement seems to be exaggerated. The EFF's Privacy Badger detected a potential tracker that's tied to the u.yimg.com domain, indicating a connection between OneSearch and Yahoo's image service. 
What Verizon apparently means is that it doesn't use cookies to build ad-targeting profiles. Verizon uses your IP address to determine your "general location," helping it deliver location-specific search results. Verizon said that "We only ever infer location data up to the city level of specificity for search localization purposes." Each contextual ad is based only "on each individual search that you perform," and it does not take into account "any of your previous search history or any other personal data that identifies you," Verizon says. Some anonymized information is shared with advertisers, the OneSearch privacy policy says: For example, if you search for "flower shops" we may display an advertisement/search result for one or more flower shops. We will sometimes provide your Search Query and/or your general location to advertising partners in order to provide you with advertisements/search results but the information they receive is never identifiable to you as we do not provide your IP Address to any advertising partners. Verizon’s failed media ventures OneSearch is delivered by Verizon Media, the division based largely on Verizon acquisitions Yahoo and AOL. Verizon Media has failed to compete effectively against Google and Facebook in the online advertising market, and it has suffered multiple rounds of layoffs. Verizon has pursued various media ventures outside its core telecom business, such as the Go90 video service that was unpopular and shut down after less than three years. While OneSearch is available on the Web today, Verizon said that mobile apps for Android and iOS will come later this month. Verizon said that OneSearch is initially available in North America and will be available in countries outside North America "soon." How OneSearch works The OneSearch privacy notice offers a breakdown of what happens after you enter a search query. The process involves Verizon, Microsoft's Bing, and other unnamed companies. 
Here's a summary of how it works:

1. Your IP address, search query, and user agent are transferred over HTTPS to Verizon servers. The user agent generally includes data about the browser, operating system, and type of device and app you're using to make the search.
2. Verizon derives your city-level location data from your IP address and then sends your IP address, user agent, search query, and location data to Microsoft's Bing "so that the actual search request can be made through their search engine."
3. Bing provides the search results to Verizon, and then Verizon's automated process "works with our Search Partners to provide you with contextual advertisements and/or search results." Verizon describes the "search partners" vaguely as "certain companies providing search result optimization input" and says they "are not provided with your personal data."
4. Verizon will store your IP address for four days "for the purpose of network traffic protection" and then permanently delete it. Bing will continue to store the IP address, search query, and user agent, also for network traffic protection. After four days, Bing "obfuscates the IP address."

Additionally, Verizon says it stores your IP address, search query, and user agent "in different servers in such a way that they are not able to be connected."

Do you trust Verizon?

Verizon is an unlikely candidate to launch a product whose entire pitch is based on privacy. In March 2016, Verizon agreed to pay a $1.35 million fine and give users more control over "supercookies" that were used to identify customers to deliver targeted ads. Verizon's use of the supercookies without properly notifying users violated a net neutrality rule that required Internet providers to disclose accurate information about network management practices to consumers, the Federal Communications Commission said at the time. 
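For readers who think in code, the "How OneSearch works" steps above can be condensed into a small sketch. Everything here is a hypothetical illustration (stub functions, a fake location lookup, invented field names) based only on Verizon's published description, not its actual implementation:

```python
# Hypothetical sketch of the published OneSearch flow. Server behavior,
# function names, and the stubbed location lookup are illustrative
# assumptions, not Verizon's real implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class SearchRequest:
    ip_address: str   # step 1: sent over HTTPS to Verizon
    user_agent: str
    query: str

def derive_city_location(ip_address: str) -> str:
    # Step 2: Verizon derives city-level location from the IP (stubbed here).
    return "Boston, MA"

def bing_search(query: str, location: str, user_agent: str) -> list:
    # Steps 2-3: the request is forwarded to Bing, which returns results (stubbed).
    return ["result for '%s' near %s" % (query, location)]

def one_search(req: SearchRequest) -> dict:
    location = derive_city_location(req.ip_address)
    results = bing_search(req.query, location, req.user_agent)
    # Contextual ads depend only on the current query -- no stored
    # history or user profile is consulted.
    ads = ["ad matching keyword '%s'" % w for w in req.query.split()]
    # Step 4: the IP is retained for "network traffic protection" and
    # scheduled for deletion four days later.
    ip_delete_at = datetime.now(timezone.utc) + timedelta(days=4)
    return {"results": results, "ads": ads, "ip_delete_at": ip_delete_at}
```

Note that nothing persists between calls: the "no search history" promise amounts to the ad selection depending only on the query in hand.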
Verizon was also one of several major carriers that sold its mobile customers' location information to third-party data brokers, but Verizon promised to stop the practice in 2018 after a security problem leaked the real-time location of US cell phone users. T-Mobile, Sprint, and AT&T apparently continued the sales longer than Verizon did. All four carriers were hit with class-action lawsuits accusing them of violating federal law by selling their customers' real-time location data to third parties. But a US District Court judge in Maryland granted the carriers' motions to compel arbitration, forcing customers to arbitrate the disputes outside of court. Despite Verizon's apparent devotion to privacy with OneSearch, the company has opposed government regulations that would force carriers to protect customer privacy. For example, Verizon opposed Obama-era FCC rules that would have required ISPs to obtain customers' opt-in consent before using, sharing, or selling Web-browsing history, arguing that "personalized advertising benefits consumers." That opt-in rule was blocked by the Republican-controlled Congress and President Trump before it took effect. If you're looking for a privacy-focused search engine, Verizon isn't your only option. DuckDuckGo provides a search engine and promises not to collect or share any of its users' personal information. There's also Startpage, which uses Google search results but removes trackers and logs in order to make search queries private. Source: Verizon offers no-tracking search engine, promises to protect your privacy (Ars Technica)
  17. In 2011, Apple unveiled its first iPhone with artificial intelligence, a personal assistant named Siri that could answer questions and help keep track of our daily lives. The AI revolution had begun, and it ushered in higher-resolution cameras on phones like the then-new iPhone 4S, along with microphones and cameras in the home in everything from connected speakers, security devices, and computers to even showers and sinks. By the end of the decade, we were carrying or living with devices capable of tracking our every movement. Counties and states are selling our personal information to data brokers, who resell it back to us in the form of "people search engines." Facebook and Google have so refined their tracking skills in the pursuit of selling targeted advertising to marketers that many people believe the companies are listening to us at all times. They are that good at serving up ads based on our interests, whether we want them or not.

Goodbye privacy!

The "10s" were the decade in which our privacy went away if we were connected to the Internet, which means most of us. Apple went on a crusade to protect our privacy, which could be argued was a competitive advantage over rivals, and groups ranging from the Electronic Frontier Foundation (EFF) to the Privacy Coalition began speaking out. In Europe, major changes were made to privacy laws on behalf of consumers, and a new California law goes into effect in January that will make it harder for companies to take our data and resell it. Or so the language says. Even as more people became aware of privacy issues, and companies like Facebook announced several security breaches of our data, the bottom line is that the social network has more users and makes more money. Ditto for Google.

"The biggest difference between then and now is that people didn't really think about what companies were collecting on us," says Chris Jordan, CEO of Fluency, a data analysis company. "We weren't worried about privacy. Now we are." 
Not that it wasn't brought to our attention. In 2011, then-hacker/security researcher Samy Kamkar discovered that iPhone, Android, and (the then still operating) Windows Phone mobile devices were sending GPS information back to their makers, even when the location services option was turned off, and made his findings public.

Bluetooth is always on, despite the settings

In 2019, Kamkar demonstrated for a USA TODAY reporter how little has changed. Turn off the blue Bluetooth icon in the iPhone's Control Center, then go to the Bluetooth section of the Settings app, and Bluetooth is still running. "When you disable it, you're not disconnecting the software that continues to broadcast the information," says Kamkar, who is now the chief security officer and co-founder of Openpath, a company that aims to replace the office badge with app-based tokens for entrances. "I can still get your name and phone number simply by being in the vicinity" and picking up the Bluetooth signal.

And there are more sensors reading you than ever before. Google now tracks your every movement if you're a user of the Google Maps smartphone app, recording a history of your whereabouts whether or not the app is even open. "We knew we were being tracked on phones, but didn't realize that the companies were using the data in ways most people don't approve of, or even realize it was capable of," says Danny O'Brien, director of strategy for the EFF. Privacy concerns went from something people were "benign about, to genuinely anxious," he adds.

Cover your webcam and your phone

"Most people cover their webcam cameras, but don't think about the phones," says Kamkar. Thomas does. She brings her phone to the bathroom, but always places a lens cloth over the cameras. "I don't want some stranger watching me change my clothes," she says. "I cover everything." 
From the bathroom to the living room: the major innovation in TVs over the decade has been the smart TV, which eliminates the need for an external streaming device to bring internet programming from the likes of Netflix, Hulu, and Disney+ directly to the set, without having to change the HDMI input. The sets themselves got so cheap that retailers are practically giving them away, with many Black Friday deals offering 40- and 50-inch models in the $200 to $300 range. Sets of the same size were selling for around $1,000 in 2010.

How to make money selling TVs — resell our data

That's the good news. The bad: to turn a profit, manufacturers now make up the difference by selling your viewing habits to data brokers, letting them know what shows and networks you watch, your demographic and location information, and more. Samsung has a TV with a built-in video camera to enable video chat, but the camera also makes the TV more susceptible to hacking. The onus is on consumers to protect their smart devices with strong passwords, especially for the home network.

Which brings us to the ever-present doorbell security cameras that are increasingly showing up in people's homes. Ring, a company owned by Amazon, has come under attack by privacy groups for allegedly being easy to hack, and not just in its doorbell product. The group Fight for the Future put out its own product warning, saying Ring cameras are not safe. Recently, several families have reported that their Ring cameras were hacked. In response, Ring said its owners needed to use stronger passwords.

Each time I've watched this video it's given me chills. A Desoto County mother shared this Ring video with me. Four days after the camera was installed in her daughters' room she says someone hacked the camera & began talking to her 8-year-old daughter. 
Meanwhile, as we close out the decade, people are indeed fighting back against the privacy invasion, and politicians have taken up the cause with vows to break up big tech. But what will it all look like in another 10 years? There's artificial intelligence and facial recognition to add to all the tracking that's going on now. The age of "Minority Report," the sci-fi story and film in which the government could predetermine what you were going to do, "will happen," says Kamkar. "It's just a question of how far we'll let it go."

Source
  18. Apple is crashing CES officially this year. What you need to know

- Apple is attending CES for the first time in decades.
- The company's Senior Director of Privacy, Jane Horvath, will attend a privacy roundtable.
- The event will focus on consumer privacy, how to build it at scale, and how regulation will affect it.

After decades without any attendance, Apple is making an official return to the Las Vegas CES technology conference in 2020. As reported by Bloomberg, the company is attending not to pitch a new product but instead to talk about consumer privacy. Jane Horvath, Apple's Senior Director of Privacy, will be speaking at a "Chief Privacy Officer Roundtable: What Do Consumers Want?" event at the conference, set for January 7, according to the CES schedule. The roundtable will also include representatives from Facebook and the Federal Trade Commission. The conference describes the event as a discussion between the invitees to answer a number of questions concerning consumer privacy:

Privacy is now a strategic imperative for all consumer businesses. "The future is private" (Facebook); "Privacy is a human right" (Apple); and "a more private web" (Google). How do companies build privacy at scale? Will regulation be a fragmented patchwork? Most importantly, what do consumers want?

It will be moderated by Rajeev Chand, Partner and Head of Research at Wing Venture Capital. The rest of the panel will be made up of representatives from Apple, Facebook, Procter & Gamble, and the FTC. 
Below is a list of who will be attending the roundtable, their role, and the company they are representing:

- Rajeev Chand, Partner and Head of Research, Wing Venture Capital
- Erin Egan, VP, Public Policy and Chief Privacy Officer for Policy, Facebook
- Jane Horvath, Senior Director, Global Privacy, Apple
- Susan Shook, Global Privacy Officer, The Procter & Gamble Company
- Rebecca Slaughter, Commissioner, Federal Trade Commission

Apple unofficially showed up at CES last year when it hung enormous billboards across Las Vegas during the conference touting the company's focus on privacy. The billboards featured the back of an iPhone X with the words "what happens on your iPhone, stays on your iPhone." According to Bloomberg, the roundtable talk will mark the first time since 1992 that Apple has formally attended the conference.

Source
  19. The recent firing of a Google employee demonstrates how you relinquish your privacy, and your private data, including personal photos, when you put work accounts on your personal device.

The Bill of Rights covers only what the government can do to you. Unless you work for the government, many of your rights to free speech and freedom from search and seizure stop when you walk in, or log in, to your job. "If you're on your employer's communications equipment, you've got virtually no privacy in theory and absolutely none in practice," says Lew Maltby, head of the National Workrights Institute.

The lack of workplace digital privacy has become a hot topic with the recent firing of four Google employees over what Google says were violations such as unauthorized accessing of company documents, and what the workers say was retaliation for labor organizing or criticizing company policies. One of them, Rebecca Rivers, recounts how her personal Android phone went blank when she learned that she'd been placed on administrative leave in early November. (Google subsequently fired Rivers.) "At nearly the exact same time, my personal phone was either corrupted or wiped," she said at a Google worker rally in November. The loss was especially painful for Rivers, who is transgender. "Everything on my phone that was not backed up to the cloud was gone, including four months of my transition timeline photos, and I will never get those back," she said, her voice quavering.

How did this happen? Likely through an Android OS feature called a work profile, which allows employers to run work-related apps that they can access and manage remotely. Apple iOS has a similar capability called MDM, or mobile device management, in which work apps run in a virtual "container" separate from personal apps. Various companies make MDM applications with varying levels of monitoring capabilities. 
There are many legitimate reasons why a company might want to use this tech: it allows them to implement security measures to protect company data in email and other apps that run in the separate work profile or container, for instance. They can easily install, uninstall, and update work apps without you having to bring the device in. But they can also spy on you, or wipe out all your data, whether deliberately or negligently. That's why mixing work networks and personal devices is a bad idea.

My smartphone is your smartphone

All modern phones have GPS capability. With a work profile or MDM toehold in your phone, an employer could install an app to track everywhere you go, as Owen Williams at OneZero points out. He gives the example of MDM maker Hexnode, which goes into great detail on how it can track device location at all times.

Williams also notes that a company may require your phone to connect to the internet through its encrypted virtual private network. This security measure makes sense for business, but it means that all of your data, even personal data, may be passing through the company's network. That makes the data fair game for the company to look at, since there is simply no law or legal precedent to stop it. "That's not really different from using your company's desktop computer to send a personal email from your cubicle," says attorney and security expert Frederick Lane. "If you send unencrypted personal data across a network owned and controlled by your employer, then you should assume that it can be captured and stored."

Rivers recently tweeted a line of her employment contract that spells this out: "I have no reasonable expectation of privacy in any Google Property or in any other documents, equipment, or systems used to conduct the business of Google." I asked Google about this policy. A spokeswoman said that it should not come as a surprise and is standard practice at large companies. 
A notice of the privacy policies also pops up when the work profile is installed, she said.

What happens if you lose your data?

What Rivers hadn't expected was losing personal data on her own device. But this is increasingly common, says Maltby, who calls it a bigger danger than being spied on. "They're wiping your personal device with the goal of getting rid of the company data, but when you wipe the phone, you wipe everything," he says. Google told me that a suspended employee may lose personal data if they stored it in a work account, and that they can ask Google to retrieve it for them. It's unclear exactly what happened to Rivers's phone, or whether Google has a backup. But companies often completely wipe employees' own phones without providing a way to back up personal information, Maltby says. "It's not that they want to cause you trouble," he says of employers. But "they would have to spend a little time and money to set up a system that would protect your privacy for the personal information that happens at work. And they don't bother to do it."

Worker advocates such as Maltby believe that total wiping of phones should be illegal under a law called the Computer Fraud and Abuse Act. The CFAA basically prohibits unauthorized access to a computing device, such as stealing data or planting malware. But advocates have struggled to find a legal case that can set a precedent for employee cellphones. "The courts insist on seeing tangible monetary damages, and usually there aren't any," Maltby says. Of course, losing personal data like photos documenting key moments of life is so painful precisely because its value is intangible. There's also no way to put a monetary value on the hassle of carrying a second phone, or of fighting an employer that's reluctant to pay for one. But placed side by side, securing your privacy is probably worth more than enduring some inconvenience.

Source
  20. After years of freely, or unknowingly, giving up their data, consumers are becoming wary. The next decade could see ground rules set for an industry that's used to making the rules (or not) as it goes.

"I swear my phone is listening to me," says everyone with a smartphone. And they're not just talking about Siri. Targeted ads and friends' posts regularly pop up on Instagram and Facebook in the midst of our conversations about those very products and people. Coincidence? No one has been able to conclusively prove otherwise, but we do know Alexa is recording us on our Echo Dots.

Technology is pushing the limits of how marketers not only meet, but anticipate, the growing demand for a frictionless customer experience. With the convergence of Big Tech, machine learning, enhanced targeting, and personalization, we arguably stand at the precipice where either personalization or privacy will hold sway. The pushback has already begun, with the 2020s poised to be the decade that sets some ground rules for an industry that's used to making the rules (or not) as it goes.

"If you think about healthcare, if you think about financial services, every other industry with high volumes of data collection—they are already regulated, but it hasn't come to roost for those of us in the marketing and advertising space," noted Fatemeh Khatibloo, a Forrester analyst with expertise in the privacy/personalization paradox. "And clearly what we're hearing from consumers and regulators is that it's time to put those guardrails up."

Indeed, the EU enacted the General Data Protection Regulation in 2016, giving citizens more control over their data and shutting down some businesses overnight, and now this privacy trend has reached our shores. 
Maine and Nevada quietly enacted their own protections earlier this year, and the California Consumer Privacy Act, which allows residents not only to opt out of having their data collected, shared, or used, but also to sue businesses for data breaches, takes effect on Jan. 1. Similar bills are in the works in Illinois, Maryland, Massachusetts, New York, Rhode Island, and Texas. In time, depending on which way the political wind blows, we may see a comprehensive law protecting data and privacy enacted by Congress.

The imminent arrival of 5G, with its blazingly fast broadband speeds and improved mobile networks, will only add to the urgency of the privacy-and-security conversation. It's obvious why 5G would appeal to consumers, but will they be as thrilled with the potential for marketers (and the government) to have even more access to their every move? For brands, the coming decade promises extraordinary technological advances that will unleash new and exciting ways to enhance their value. But as we know all too well, technology can get away from us and into the hands of bad actors. With innovation being a moving target, and consumers thirsting for the next big thing, it will be incumbent on marketers and regulators to strike a balance between access and protection.

Source
  21. This year, look for a tech startup that solves the digital consumer privacy crisis.

Black Friday and Cyber Monday made clear that the online-offline divide in consumers' minds has almost disappeared. Among the big winners for sales in 2019 will be a device that is perhaps the best physical representation of that diminishing online-offline divide: the digital assistant. The main contenders for consumer dollars this year come by way of Amazon, Google, and Apple. Amazon Echo smart home products have been among the company's most popular items for a while now, but they hit new records in the recent four-day stretch from Black Friday to Cyber Monday.

Internet connectivity continues its march to omnipresence in everyday consumer goods. Televisions feature built-in internet functionality, and the FBI just released a warning about them. A number of the newer TVs also have built-in cameras. In some cases, the cameras are used for facial recognition so the TV knows who is watching and can suggest programming appropriately. There are also devices coming to market that allow you to video chat with Grandma in 42" glory.

Beyond the risk that your TV manufacturer and app developers may be listening to and watching you, that television can also be a gateway for hackers into your home. A bad cyber actor may not be able to access your locked-down computer directly, but your unsecured TV may offer an easy way in through your router. Hackers can also take control of your unsecured TV. At the low end of the risk spectrum, they can change channels, play with the volume, and show your kids inappropriate videos. In a worst-case scenario, they can turn on your bedroom TV's camera and microphone and silently cyberstalk you. The conveniences afforded by all this new connected technology are great, but it's important to bear in mind that it also has its downside. 
Even basic home goods like doorbells and light bulbs are commonly being sold with Wi-Fi connectivity and the ability to integrate into Google Home-, Siri-, or Alexa-enabled networks. These devices don't just talk to one another. They're also providing the companies that manufactured them with a gold mine of data about how they're being used--and, increasingly, who is using them.

It's not just IoT gadgets. Tech companies are busy these days trying to weave their way into your wallet, your entertainment, and your health, all the while mining as much data as possible to leverage in other markets and industries. This has an air of inevitability about it because the right entrepreneur has not yet had the right aha! moment to make it stop being an issue. That said, cracks are beginning to appear in the current smash-and-grab approach to personal consumer data, and consumers are becoming increasingly wary of how their data is being collected and used, as well as who has access to it.

Break Out the Torches and Pitchforks

If a consumer revolt sounds overly optimistic, consider the uproar earlier this year over revelations that smart home speakers were eavesdropping consistently, and sometimes indiscriminately, on consumers, and the resulting semi-apologies issued by Apple, Amazon, and Google. Or look at the ongoing civil rights concerns regarding Amazon's Ring surveillance cameras, or the recent lawsuit against TikTok for allegedly offloading user data to China, or the reports of customers abandoning their Fitbits after the company was acquired by Google.

The message seems clear to me. Consumers may enjoy the convenience and easy access of the internet, but more and more they bristle at the lack of transparency when it comes to the way their data is handled and used by third parties, and at the seeming inevitability that it will wind up on an unsecured database for any and all to see. 
While the fantasy of consumers uninstalling and unplugging en masse is common among a small community of sentient eels indigenous to the Malarkey Marshes of Loon Lake, there remains a business opportunity for the larger online community.

Will the Genius of Loon Lake Please Stand Up?

The effort to create a more privacy- and security-centric internet experience for consumers has largely been led by nonprofit organizations. World Wide Web inventor Tim Berners-Lee has been publicly discussing plans to create a follow-up with the aim of reverting to the web's original ideals of an open and cooperative global network with built-in privacy protections. Meanwhile, the nonprofit Mozilla organization has revamped its Firefox browser to block several types of ad trackers by default and provide greater security for saved passwords and account information, in addition to publishing an annual guide that scores internet-connected devices on their relative privacy friendliness and security. Wikipedia founder Jimmy Wales announced in November a service meant to provide an alternative to Twitter and Facebook, reliant on user donations rather than the other social platforms' often Orwellian ad-tracking software.

Without a user base or killer app to drive adoption, Berners-Lee's new web has been in the works for years, and Wales's idea is a rehashing of a similar project called WikiTribune that also never managed to find its footing. Firefox is a quality browser, but its market share pales next to Google Chrome's. Thus far, nonprofit-driven alternatives have found no lure to drive consumer adoption. The next stage of privacy-centric development may need a profit motive to make inroads into the privacy protocols and proxies that dominate apps and devices. It can't be merely self-sustaining; it must be compelling for users, developers, and engineers. 
One such company, Nullafi, has the right idea: anonymizing and individualizing a user's most common digital identifier by creating email burners that redirect to the user's private account. (Full disclosure: I'm an investor.) We need to see more of this kind of development, and we need to see it get adopted. The current large-scale investment in cybersecurity proves there's a market in our post-Equifax-breach world where awareness of data vulnerability and the possibility of getting hacked have hit critical mass. The time for the unicorns to arrive is now. Source
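As a rough illustration of the burner-address idea described above (not Nullafi's actual design; the relay domain, class, and method names here are invented), the core mechanism is simple: each service gets its own random alias that forwards to one real inbox and can be revoked independently, so a leaked or sold address burns only one relationship:

```python
# Minimal sketch of a burner-email directory. All names are hypothetical
# illustrations of the concept, not any vendor's implementation.
import secrets
from typing import Optional

class BurnerDirectory:
    def __init__(self, real_address: str, domain: str = "relay.example"):
        self.real_address = real_address
        self.domain = domain
        self._aliases = {}  # maps alias -> service it was issued for

    def alias_for(self, service: str) -> str:
        """Create (or reuse) a unique random alias for a given service."""
        for alias, svc in self._aliases.items():
            if svc == service:
                return alias
        alias = "%s@%s" % (secrets.token_hex(6), self.domain)
        self._aliases[alias] = service
        return alias

    def route(self, to_address: str) -> Optional[str]:
        """Forward mail sent to a known alias; drop everything else."""
        if to_address in self._aliases:
            return self.real_address
        return None

    def revoke(self, service: str) -> None:
        """Kill one service's alias without touching the others."""
        self._aliases = {a: s for a, s in self._aliases.items()
                         if s != service}
```

The design choice worth noting is per-service isolation: revoking one alias cuts off exactly one sender, which is what makes the identifier disposable rather than merely pseudonymous.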
  22. The privacy-focussed search engine Startpage has announced a news tab for search results. Currently, Startpage offers web, image, and video results; the addition of a news tab will bring it closer to feature parity with competitors, making it easier for people to switch. As with its other search features, the news results will not be influenced by any tracking, which ensures users see a more balanced list of results.

Many search engines use prior searches and browsing history to shape results. While these companies say that the results are more relevant, Startpage and other privacy-oriented search engines like DuckDuckGo argue that these "filter bubbles" are more like traps where some results are hidden from you. Instead of filtering news results based on your browsing history, Startpage will filter results based on the time and date that they were published; this way, you'll be able to keep up with all the latest developments as they evolve. Commenting on the new feature, Startpage said in an e-mail:

If you're not familiar with Startpage, it is one of the search engines that has gained popularity in recent years following the revelations made by Edward Snowden surrounding state surveillance. Startpage sets itself apart from the competition by storing no IP addresses, no personal user data, and no tracking cookies. Additionally, users can view results incognito with the Anonymous View tool.

Source: Privacy-oriented Startpage search engine adds News tab to results (via Neowin)
  23. The Royal Malaysia Police (PDRM) “are allowed to inspect mobile phones to ensure there are no obscene, offensive, or communication threatening the security of the people and nation,” the Dewan Rakyat was told yesterday. According to a media report from MalaysiaKini, PDRM also have the right to intercept communications, including “phone bugging” or “tapping,” to ensure investigations can be carried out in cases involving security. The article quoted Deputy Home Minister Mohd Azis Jamman, who was responding to questions from YB Chan Ming Kai (PH-Alor Star). The Deputy Home Minister also said that “the public should be aware of their rights during a random check, including requesting the identity of the police officer conducting the search for record purposes, in case there is a breach of the standard operating procedures (SOP).” However, details of the “Police SOP” were not revealed.

In 2014, the then Minister in the Prime Minister’s Department, Nancy Shukri, said that law enforcers (such as PDRM) in the country are empowered under five different laws to tap (wiretap) any communications by suspects of criminal investigations. This includes intercepting, confiscating, and opening any package sent via post; intercepting any messages sent or received through any means of telecommunication (voice/SMS/Internet); and intercepting, listening to, and recording any (phone) conversations over telecommunications networks. The provisions are found under Section 116C of the Criminal Procedure Code, and also under Section 27A of the Dangerous Drugs Act 1952, Section 11 of the Kidnapping Act 1961, Section 43 of the Malaysian Anti Corruption Commission Act 2009, and Section 6 of the Security Offences (Special Measures) Act 2012 (SOSMA). 
According to Malaysia-Today in a 2013 article, Section 6 (SOSMA) gives the Public Prosecutor the power to authorise any police officer to intercept any postal article, as well as any message or conversation being transmitted by any means at all, if he or she deems it to contain information relating to a “security offence”. It also gives the Public prosecutor the power to similarly require a communications service provider like telecommunications companies (including Maxis, Celcom Axiata, Telekom Malaysia, Digi, U Mobile, Yes4G and others) to intercept and retain a specified communication, if he or she considers that it is likely to contain any information related to “the communication of a security offence.” Additionally, it vests the Public Prosecutor with the power to authorise a police officer (PDRM) to enter any premises to install any device “for the interception and retention of a specified communication or communications.” The Malaysia-Today article said such a scope of what the government can do in terms of intercepting people’s messages is troubling – at least to those who understand its implication. In particular, there are those who are anxious that it can be used to tap on detractors and political opponents. “Due to the vagueness and broadness of the ground for executing interception, this provision is surely open to abuse especially against political dissent,” said Bukit Mertajam MP Steven Sim at the time. Stressing that the act does not provide any guidelines on the “interception”, he added: “The government can legally ‘bug’ any private communication using any method, including through trespassing to implement the bugging device and there is not stipulated time frame such invasion of privacy is allowed”. 
He continued: “If that is not enough, service providers such as telcos and internet service providers are compelled by Section 6(2)(a)”. At the moment, the Malaysian Government has not revealed the number of people or communications it has tapped or intercepted in the past decade.

[Update, 20 November 2019]: Deputy Home Minister Datuk Mohd Azis Jamman released a statement saying that the Royal Malaysia Police (PDRM) can only confiscate the cell phones of suspects and those involved in ongoing investigations, and does not conduct random checks on the public.

Source: Malaysia Police (PDRM) can Intercept your Voice Calls/SMS, check your Handphone (via Malaysian Wireless)

p/s: The Deputy Home Minister later clarified that phone checking can only be done if individuals are suspected of committing wrongdoing under the following acts (a warrant will be required as part of the SOP):

  • Penal Code (Act 574)
  • Section 233 of the Communications and Multimedia Act (Act 588)
  • Sedition Act 1948 (Act 15)
  • Security Offences (Special Measures) Act 2012 (Act 747)
  • Anti-Trafficking in Persons and Anti-Smuggling of Migrants Act 2007 (Act 670)
  • Prevention of Terrorism Act 2015 (Act 769)

The public can report to the Integrity and Standard Compliance Department (Jabatan Integriti dan Pematuhan Standard, or JIPS) if enforcement officers randomly check phones without a proper warrant and/or SOP.

Source: Home Ministry: PDRM can only check phones belonging to suspects and individuals involved in ongoing investigations (via The Star Online)

p/s 2: The original news title has been amended with “(update: Home Minister says cannot)” because, although the earlier report said the police can check (and intercept) the public’s devices randomly, the new clarification from the Home Ministry (stated in the p/s above) means the police cannot check (or intercept) devices without a proper warrant under one (or more) of the six acts listed.
  24. Intelligence agencies stopped the practice last year

American intelligence agencies quietly stopped the warrantless collection of US phone location data last year, according to a letter from the Office of the Director of National Intelligence released today. Last year, in a landmark decision, the Supreme Court ruled against authorities looking to search through electronic location data without a warrant. Citing the ruling, Sen. Ron Wyden (D-OR), a privacy hawk in Congress, wrote a letter to then-Director of National Intelligence Dan Coats asking how agencies like the National Security Agency would apply the court’s decision.

In a response to Wyden released today, a representative for the office said intelligence agencies have already stopped the practice of collecting US location data without a warrant. Previously, agencies collected that information through surveillance powers granted under the Patriot Act. But since the Supreme Court’s decision, the agencies have stopped the practice, and they now back those searches with a warrant, under the legal standard of probable cause.

In the letter to Wyden, the intelligence community official writes that the Supreme Court’s decision presented “significant constitutional and statutory issues,” but would not explicitly rule out using the tools in the future. The letter says that “neither the Department of Justice nor the Intelligence Community has reached a legal conclusion” on the matter.

Next month, provisions of the Patriot Act — specifically, Section 215 — are set to expire, raising questions about potential reforms. “Now that Congress is considering reauthorizing Section 215, it needs to write a prohibition on warrantless geolocation collection into black-letter law,” Wyden said in a statement. 
“As the past year has shown, Americans don’t need to choose between liberty and security — Congress should reform Section 215 to ensure we have both.” Source: The NSA has stopped collecting location data from US cellphones without a warrant (via The Verge)
  25. A Facebook VP says the company is looking into it

Facebook might have another security problem on its hands, as some people have reported on Twitter that Facebook’s iOS app appears to be activating the camera in the background without their knowledge. Facebook says it’s looking into what’s happening.

There are a couple of ways that this has been found to happen. One person found that the camera UI for Facebook Stories briefly appeared behind a video when they flipped their phone from portrait to landscape. Then, when they flipped it back, the app opened directly to the Stories camera. You can see it in action here (via CNET).

It’s also been reported that when you view a photo in the app and just barely drag it down, it’s possible to see an active camera viewfinder on the left side of the screen, as shown in a tweet by web designer Joshua Maddux. Maddux says he could reproduce the issue across five different iPhones, all apparently running iOS 13.2.2, but he reportedly couldn’t reproduce it on iPhones running iOS 12. Others reported they were able to replicate the issue in replies to Maddux’s tweet. CNET and The Next Web said they were able to see the partial camera viewfinder as well, and The Next Web noted that it was only possible if you’ve explicitly given the Facebook app access to the camera. In my own attempts, I couldn’t reproduce the issue on my iPhone 11 Pro running iOS 13.2.2.

Guy Rosen, Facebook’s VP of integrity, replied to Maddux this morning to say that the issue he identified “sounds like a bug” and that the company is looking into it. With the second method, the way the camera viewfinder is just peeking out from the left side of the screen suggests that the issue could be a buggy activation of the feature in the app that lets you swipe from your home feed to get to the camera. (Though I can’t get this to work, either.) 
I don’t know what might be going on with the first method — and with either, it doesn’t appear that the camera is taking any photos or actively recording anything, based on the footage I’ve seen. But regardless of what’s going on, unexpectedly seeing a camera viewfinder in an app is never a good thing. People already worry about the myth that Facebook is listening in on our conversations. A hidden camera viewfinder in its app, even if it’s purely accidental, might stoke fears that the company is secretly recording everything we do. Hopefully Facebook fixes the issues soon. And you might want to revoke the Facebook app’s camera access in the meantime, just to be safe.

Source: Facebook’s iOS app might be opening the camera in the background without your knowledge (via The Verge)

p/s: This news was posted under Security & Privacy News instead of Mobile News because it concerns a privacy issue (the camera bug) in Facebook’s iOS app.