Ring isn't just a product that allows users to surveil their neighbors. The company also uses it to surveil its customers. An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable...
This is another reason why Aurora Store is better than the Google Play store - it tells you what all of the trackers in each app are via Exodus Privacy. This is what their report for the Ring app shows:
So is there any way to use such services while blocking these trackers? Of course, I'm not looking at installing Ring stuff in my home at all. It seems like an impossible battle going forward; with the proliferation of connected tools/gadgets, you'll just have to forego a LOT of benefit by going with the offline versions. Not good.
The Ubiquiti cameras are all PoE? How does an external Cat6 line get hooked up and weatherproofed?
I'm considering the Dream Machine Pro and a PoE injector for the APs, but I'd have to get the 16-port switch if it were to power cameras as well.
I was hoping for a nice battery powered option.
> The Ubiquiti cameras are all PoE? How does an external Cat6 line get hooked up and weatherproofed?
Yes, CAT6 will work; for example, I use [AZ884028704/10 | CS44Z3 BLU C6A 4/23 U/UTP RL 305M (RCM)](https://www.commscope.com/catalog/cables/product_details.aspx?id=89164)
Note it says:
> IEEE 802.3bt (Type 4) for the safe delivery of power over LAN cable when installed according to ISO/IEC 14763-2, CENELEC EN 50174-1, CENELEC EN 50174-2 or TIA TSB-184-A
The [TOUGHCable has an ESD drain wire](https://invidio.us/watch?v=plJcn0IaPZg) that can be soldered to the prong on the metal connectors, though it's not strictly necessary if your camera/cable is sheltered. I tend to put all my cabling in a PVC conduit.
I would, however, use this cable if the camera is exposed to the elements. G4 cameras are [IP67 rated](https://en.wikipedia.org/wiki/IP_Code).
Interesting that this article only talks about the Android app. Does that mean the Ring app on iOS does not exhibit these issues? I'd be curious to know whether this is a Ring problem or more of an Android/iOS issue.
It’s a Ring problem. This is not Android doing this; it is the Ring app. It’s just easier to MITM the Android app to decipher what it’s doing.
Ring is making money off selling this information, which has no real connection to the service it purports to provide.
I can’t think of any reason why they would exclude iOS from this data collection and tracking.
But regardless of whether it does or not, why would anyone trust them after this?
They're not "giving" the data to these companies; that is incredibly misleading! They are sending data to well-known analytics providers where, AFAIK, only Ring themselves will be able to use the data; it is likely for product improvement and for understanding their user base. I use Mixpanel for my app, and as far as I know, Mixpanel doesn't have access to that data (as in, they can't sell it on to other manufacturers or use it for their own gain).
Ahh, good old Android, doing what it does best, breach your privacy. I’ve got HomeAssistant with all things connecting locally and on their own VLAN with no internet access. Track that I dare you. My firewall won’t let you.
For those that want to avoid such silliness, Reolink sells relatively cheap cameras: rated for outdoors, Power over Ethernet, $50-$60 per camera, and with a microphone included.
They can easily be connected to ZoneMinder, or to any software that can take an rtsp:// URL. They even handle motion detection for specific areas of the frame, so you can include the driveway but exclude the sidewalk. You can have them email or upload videos... without access to any Reolink-related cloud.
So you could easily put them in production with zero network access and let something you control notify you with images or video clips for any activity.
There are numerous cheap products, but Reolink seems to be one of the better ones that plays well with others and doesn't require any WAN access.
Ubiquiti and Axis also have some very nice products, but generally are more expensive.
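As a rough illustration of the zone-based motion detection mentioned above (a generic frame-differencing sketch, not Reolink's or ZoneMinder's actual algorithm; the frame sizes and thresholds are made up):

```python
import numpy as np

def motion_in_zone(prev_frame, curr_frame, zone_mask, threshold=25, min_pixels=50):
    """Naive frame-differencing motion detector restricted to a zone.

    prev_frame/curr_frame: grayscale frames as uint8 arrays of equal shape.
    zone_mask: boolean array, True where motion counts (e.g. the driveway),
               False where it is ignored (e.g. the sidewalk).
    """
    # Absolute per-pixel difference; widen dtype to avoid uint8 wraparound.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    # Count pixels that changed noticeably AND fall inside the zone.
    changed = (diff > threshold) & zone_mask
    return int(changed.sum()) >= min_pixels

# Example: a 20x20 bright blob appears inside the driveway zone.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[10:30, 10:30] = 200

driveway = np.zeros((120, 160), dtype=bool)
driveway[:60, :80] = True   # top-left quadrant is the driveway
sidewalk = ~driveway        # everything else is ignored

print(motion_in_zone(prev, curr, driveway))  # True
print(motion_in_zone(prev, curr, sidewalk))  # False
```

Real NVR software adds noise filtering, blob tracking, and cooldown timers on top of this, but the include/exclude zone idea is the same.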
Yeah, there's an entire industry of location data that flows through almost every popular app. Those that people typically allow to use GPS fetch a premium from tracking companies.
And lo and behold, Abode's app does too:
> The Sites and mobile applications may serve third party advertisements and may use third party analytics service providers to provide us with statistical information about your use of the Sites and mobile applications.
I’m fine with Abode tracking me...what makes me mad is that they’re working with companies like Facebook and other marketing companies to sell ads.
I get that they will potentially be sharing my info with my police/fire departments in an emergency, and have to share information with whatever third-party dispatch service they use; those are all necessary parts of a home security system... but this kind of tracking feels like it's going too far.
Write to your representatives and senators. Abode is one of the lesser offenders, and who knows who they are working with. They're just part of a completely unregulated world of personal data that the U.S. consumer doesn't seem to understand.
FWIW, I didn’t see anything in there about iOS. From the article, “An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII).”
Sadly, it doesn’t change anything.
Apple can’t stop you from including trackers or anything inside a HomeKit app, nor stop you from relying on the internet (older HomeKit requirements were that all functions can be accessed from the Home app, without needing a 3rd party app nor internet, but they had to relax these as companies refused to support HomeKit).
For me, the easy rule of thumb:
- If I can add the device to HomeKit, not connect it to internet nor need to install an app to use it = safe = I buy
- If it requires internet or a specific app to function = spyware = absolutely no buy
Like BeJeezus said, it’s possible to have WiFi without internet.
But also, WiFi (more specifically, IP) and BT are only necessary for devices that connect to the Home app; you can still use other protocols (X10, Zigbee, IR, etc.) for other devices and get HomeKit certification, as long as you have something (like a hub) that can connect to the Home app via BT or WiFi.
Sadly, even a Bluetooth device can require internet if say, you’re using a 3rd party app to control it.
For example, BT smart locks and cameras. Their apps do send a lot of data and commands over the internet and there is no way to disable that.
> it’s going to need WiFi and so accordingly internet
One can have WiFi without internet.
When the Internet is down in my house, the Wi-Fi is definitely still up and there is no problem watching movies from the Synology over wifi or using HomeKit wifi devices or printing via wifi or...
Yeah, it's just outside access you won't be getting. I'm positive there's a way to do it, I'm just not sure how (I'm thinking a VPN to a computer at the location with access to the network; not totally secure, but not open access).
Sadly these are old requirements.
All of these connected doorbells and cameras do require 3rd party apps that are connected in order to function (these apps use APIs on the company’s servers to function and track status).
I’m a heavy user of Homebridge and DIY devices (open source, 3D printed, Raspberry Pi connected ones), and can hardly find anything on the market that is safe (privacy wise and function wise, in case of a blackout/disaster).
The only commercial stuff I can trust a bit is from EVE, although you can’t have any history without their app (on this note, I wish HomeKit was like HealthKit and saved a history of basic datapoints like statuses, temperature, air quality, etc.).
Absolutely not, having a false sense of security is much more dangerous than just being unaware of the risks.
You should check Apple’s relaxed requirements for MFI certifications:
It’s possible to:
- Implement IP/network based specifications that don’t work with the home app nor Siri
- Pair the accessories outside of the home app (especially for external hubs)
- Implement functions “not yet” supported by HomeKit Frameworks
The last point also means that it’s not possible to control those functions from the Home app, requiring a 3rd party app that most of the time will need to contact a server.
There are many accessories that use IFTTT and/or a server based set of API to function. You can’t use these offline or without 3rd party apps.
Apple just made the secure video services *because* abuse of privacy started to happen with some of the connected video/doorbells companies.
I don't think you are reading those correctly.
It means you can use the device app alongside homekit.
For instance say you have an ecobee thermostat. You can use the ecobee app or the homekit app. But if you go into your router and block the thermostat from accessing the internet, only homekit will work.
Not to mention, we see issues with the Circle 2 not being able to be used by the Logitech app, or to be factory reset, once enrolled in HomeKit Secure Video. So HKSV and HKSR (HomeKit Secure Router) show that cameras are tied to the Home app and use the Home Hub to do the processing; with secure routers, the accessories are blocked from connecting to anything else and isolated.
Also, in iOS 13.2 we saw a push in the Home app to update your accessories to the latest version, but it did require you to use the 3rd party app. I hope in iOS 14 or soon after you can update accessories right in the Home app.
This won’t happen unless Apple changes the HomeKit specification to include a “firmware update URL” and makes it mandatory.
Even in this case, it will probably also need a few hooks for status, such as “updating”, “updated”, “failure”, etc.
I really like the secure routers because they basically act as Pi-hole to block traffic.
Personally, I’m extremely uneasy knowing that any of my accessories would “communicate” with an unknown server to send statuses about my house availability/activity, that’s the reason I make my own accessories. But if Apple started to block by default, maybe we’ll see more privacy oriented devices on the market.
I'm an iOS user and can share that I'm seeing these queries, but I don't know how to confirm whether they are a consequence of Ring app usage.
That being said, the OP sharing this article was motivation enough for me to check. From the query log, I've noted that some queries occurred in the same time frame as API calls to Ring.
Either way, I'm one step ahead again... more domains that I do not want any involvement with are now blocked.
Ok, there’s some stuff in the article that is purposefully trying to stir up trouble. Certificate pinning is a standard, recommended security best practice to prevent man-in-the-middle attacks by malicious actors.
The fact that pinning also blocks the MITM technique EFF used to determine what data is being sent to third parties does not mean that Ring is trying to keep scary data practices away from prying eyes. It means they’re employing good security practices.
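For the curious, pinning boils down to comparing a hash of the certificate the server actually presented against a value baked into the app. A minimal sketch (the bytes and pin here are stand-ins; real-world pinning, e.g. OkHttp's CertificatePinner, hashes the certificate's SubjectPublicKeyInfo rather than the whole DER, so the pin survives certificate renewal when the key pair is reused):

```python
import base64
import hashlib

def cert_pin_ok(cert_der: bytes, pinned_sha256_b64: str) -> bool:
    """Check a presented certificate (DER bytes) against a pinned SHA-256 hash."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.b64encode(digest).decode("ascii") == pinned_sha256_b64

# Toy example with stand-in bytes instead of a real certificate:
presented = b"not-a-real-der-certificate"
pin = base64.b64encode(hashlib.sha256(presented).digest()).decode("ascii")

print(cert_pin_ok(presented, pin))           # True: matches the pin
print(cert_pin_ok(b"mitm-proxy-cert", pin))  # False: interposed cert rejected
```

This is exactly why an MITM proxy's substituted certificate fails the check even when the proxy's CA is trusted by the OS, which is what EFF had to work around.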
Having helped run a multi-million-dollar retail app program, I can tell you we used tools like Mixpanel so that we could learn everything we could about our customers. Customer data is king. The more you know about your customers, the more you can improve your services and make more money.
Should Ring and the whole app industry be more transparent? Yes, and they absolutely should include mechanisms to opt out. However, this whole article is talking about practices and tools that are generally accepted and widely used across the industry. I’m not saying it’s good. I do wish our government did more like what the EU is trying to do to protect consumers. But this article is really trying to make something sound much more shady and scary than it really is.
The camera itself doesn't [need DDoS (sic) protection], but the site that controls who can log in and use that camera does need protection. Not specifically DDoS protection, but protection from credential-stuffing attacks, which are usually carried out by a botnet going through a bunch of anonymous proxies, making IP reputation scoring useless. There are even Burp Suite scripts that use AWS API Gateway and Lambda to get a near-endless supply of clean, random IP addresses.
Therefore, they use device fingerprinting to attempt to distinguish legitimate user authentication from bot attempts. It is one of the less intrusive ways to tell a human from a bot.
You mentioned botnet protection for a camera running in my house. I'm still puzzled why anything running in my home would need protection from this? Why would it even need to connect to some site? Just keep everything in my home behind my firewall.
None of this nonsense is remotely necessary when you run home assistant + some cheap rtsp cameras
All of that is very true if you run your own internal private servers, and use cameras that only talk to those servers. That is a good setup for IT enthusiasts, but not for Joe Average user. My comments were related to why _Ring specifically_ needs to protect their _servers_ so that people can't take over accounts and view cameras that don't belong to them. It actually has nothing to do with the cameras themselves. It all has to do with the user authentication process with the server that stores the video or processes the connection to the camera. It doesn't really matter where the server is or who runs it, if it is exposed to the internet.
If your private server was exposed to the internet over some authentication-protected endpoint, like Ring must, then you would need to take steps to protect it. If you don't (or haven't), then it is probably easily accessible via a decent Shodan search soon after appearing on the internet.
Securing your local server may start by using a good password, and hoping no one attempts a brute-force attack. But you've got a live camera feed of the most beautiful thing in the universe. Your attacker is motivated and financially backed. Your next step would likely be IP-based, and you could rate-limit and use fail2ban. But your attacker is smart and financially motivated, so they have ways around those things. They have people that want to pay to view your most beautiful thing.
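To make the rate-limiting step concrete, here's a toy per-IP sliding-window limiter (a sketch of the idea only; fail2ban actually works by scanning logs and inserting firewall rules, and the limits here are arbitrary):

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Allow at most `max_attempts` login tries per IP within `window` seconds."""

    def __init__(self, max_attempts=5, window=60.0):
        self.max_attempts = max_attempts
        self.window = window
        self._attempts = defaultdict(deque)  # ip -> timestamps of recent tries

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        recent = self._attempts[ip]
        # Drop attempts that have aged out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_attempts:
            return False  # throttled; a distributed attacker just rotates IPs
        recent.append(now)
        return True

limiter = LoginRateLimiter(max_attempts=3, window=60.0)
print([limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)])
# -> [True, True, True, False]
print(limiter.allow("203.0.113.7", now=70))  # window expired -> True
```

The last comment in `allow` is the whole point of the thread above: per-IP throttling only works until the attacker spreads the attempts across thousands of addresses.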
You need to add even more protection measures, because now 90k different IP addresses are attacking you every hour. Maybe you create an even longer pass phrase and add 2FA to the login process, which isn't too much trouble since you're the only user. That won't stop the hackers from attempting to attack your authentication endpoint. It just makes it take longer for them. But they are very determined. If you use SMS-based 2FA, they may attempt a SIM Swap attack. If you use Google Authenticator compatible codes, they just keep using random numbers in their attacks. You can't let anyone else see the most beautiful thing, so you need more protection.
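An aside on the Google Authenticator point: those codes are just RFC 6238 TOTP, an HMAC over the current 30-second time step, which is why "keep trying random 6-digit numbers" remains a viable (if slow) attack; there is no extra secret in the code format itself. A minimal stdlib implementation:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # low nibble picks the offset
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current time step."""
    return hotp(key, unix_time // step)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59.
print(totp(b"12345678901234567890", 59))  # -> 287082
```

With only a million possible codes per 30-second window, the defense is rate limiting and lockout on the server side, not the code itself, which is exactly why the escalation described above doesn't stop at 2FA.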
You'll need another way to ensure that a human is accessing the authentication/login API and not a bot. You're running out of options at this point, but decide to add a CAPTCHA to the login process. It is just you, so it isn't a big deal that it is incredibly annoying. But the hacker has free and paid tools to solve those simple CAPTCHAs, and even the newer Google reCAPTCHAs. It just takes longer, but they don't and won't stop. You need more protection.
The next thing you could use is a unique device fingerprint, which is easy enough since you have a limited number of devices that change infrequently. But how do you generate a unique fingerprint that cannot be forged? That's complicated...
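To see why it's complicated: a naive fingerprint is just a hash of client-reported attributes, and anything the client computes, a bot can recompute with spoofed inputs. A hypothetical sketch of that naive approach (the attribute names are invented for illustration):

```python
import hashlib
import json

def device_fingerprint(attrs: dict) -> str:
    """Naive fingerprint: hash of canonicalized client-reported attributes.

    The flaw this illustrates: every input comes from the client, so a bot
    can replay real attributes or randomize one field and mint a "new
    device" at will. Unforgeable fingerprints need server-side forensics.
    """
    canonical = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

real = device_fingerprint({"ua": "App/3.2 Android", "screen": "1080x2340", "tz": "America/Denver"})
# Key order doesn't matter thanks to canonicalization...
same = device_fingerprint({"tz": "America/Denver", "screen": "1080x2340", "ua": "App/3.2 Android"})
# ...but tweaking any one attribute yields a brand-new "device".
fake = device_fingerprint({"ua": "App/3.2 Android", "screen": "1080x2340", "tz": "Europe/Minsk"})

print(real == same)  # True
print(real == fake)  # False
```

This is why the commercial fingerprinting services mentioned below collect many more signals than an app could hash locally.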
If you're a smart IT person, then you probably have a VPN back into your home network, and it is protected much better from password-based authentication attacks through the use of certificates. In the theoretical example above, you could also use HTTPS client-certificate authentication, which provides as much assurance as a VPN or SSH certificate.
Ring doesn't have the luxury of many of those mitigations, because their servers must live on the internet for any and all users to access them, even elderly customers that can barely use a computer... Anyone and everyone can access the Ring authentication endpoint, including the hackers that hack for the LOLs and the ones that hack for the cash. Ring has likely tried most of the above, but they operate at a much larger scale than the private-server example. Even at their scale, they don't have enough forensics to identify a real device from a fake/simulated/virtual device. But they know a company that does have tools to do that, and has the scale to gather enough forensics to make it reliable. That company doesn't sell their product for private hosting, and requires many specific data points (including possible PII) in order to produce an unforgeable device fingerprint. It is in the best financial interest of the company to use this service, to help protect users that reuse passwords everywhere and don't turn on 2FA (which is a large percentage of normal human users).
All that doesn't even touch on the other facets of developing a usable and useful large-scale customer-facing app, enabled by A/B testing, UX feedback, even UI session recordings, or crash reporting...
Wow, this turned into way too long of a post!
This isn’t news. Pretty much every mobile app has these (and not just for ad delivery). This is a big reason why companies push their mobile apps instead of mobile web; it is easier to collect telemetry. This can be used for malicious intent (selling user data), but Amazon is very likely responsible with this information.
I would love to see some financial documents leaked from these companies.
Let's say Apple pre-installs Google+ on all its phones. Then I want to know how much Apple got paid for this, i.e. how many cents a user's privacy is worth to them. And how much money did Google make by using this data, i.e. how much was the data really worth?
Because until we have such data, companies can always hide behind phrases such as "... share with partners ... to provide relevant services" and all that nonsense.