Question for the technically minded - VPN and data protection

Squidfayce

Eats Squid
Like I said, look for the technical articles for ThreatMetrix. Device ID is but one data point in the digital identity map. There are thousands for any given user, and they will vary depending on how a business wants to implement ThreatMetrix.

The technical articles will talk to the other data points: IP, true IP, your browser fingerprints, IMEI (pretty sure, can't recall), velocity of visits to sites, and how it makes associations between your activity, the devices you use and where you use them. Your head will spin.
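
To give a rough idea of what those "data points" look like in practice, here's a toy sketch of a handful of signals being folded into a single device ID. This is NOT ThreatMetrix's actual algorithm (they don't publish it), just an illustration of the concept, with made-up signal names and values:

```python
# Hypothetical sketch: combine a few fingerprint signals into one device ID.
import hashlib
import json

def device_fingerprint(signals: dict) -> str:
    """Hash a dictionary of collected signals into a stable device ID."""
    canonical = json.dumps(signals, sort_keys=True)   # stable key ordering
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

signals = {
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_5 ...)",
    "screen": "390x844x3",            # resolution + pixel ratio
    "timezone": "Australia/Sydney",
    "language": "en-AU",
    "fonts_hash": "a9f3c2",           # hash of installed font list
    "canvas_hash": "77b1e0",          # hash of a canvas render
    "public_ip": "203.0.113.45",
    "true_ip": "10.1.4.22",           # e.g. leaked past a proxy/VPN
}

print(device_fingerprint(signals))
```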

At our business we can tell when a newly activated burner has been probing us and is being used to try and access accounts or submit fraudulent applications, because of the device age, its newness to the aggregated digital map etc. You can then watch that same device, in real time, attempt to do weird shit all over your networks and sites and finally give up in frustration, because we basically feed our systems live device data and use automated controls to say "fuck off, there's nothing for you here".
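
Something in the spirit of that automated "fuck off" rule might look like the sketch below. The field names and thresholds are completely made up, purely to show the logic of "brand new device, no history, hammering endpoints":

```python
# Toy burner-detection rule, assuming the kinds of signals described above.
from datetime import datetime, timedelta

def is_probable_burner(device: dict, now: datetime) -> bool:
    first_seen = device["first_seen_globally"]        # first sighting across the shared network
    too_new = now - first_seen < timedelta(days=2)
    no_history = device["linked_accounts"] == 0       # no prior identity associations
    high_velocity = device["requests_last_hour"] > 50
    return too_new and no_history and high_velocity

device = {
    "first_seen_globally": datetime(2023, 6, 1, 9, 0),
    "linked_accounts": 0,
    "requests_last_hour": 120,
}

if is_probable_burner(device, datetime(2023, 6, 1, 14, 0)):
    print("block: nothing for you here")
```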

We can ascertain real client devices easily and, most of the time, work out when they use a new phone to access our services without needing to bother them. We do this by cross-referencing device info we scrape from that phone's session on our sites against recent activity and location data stored and curated by ThreatMetrix in their data centers. We can tell 90% of the time whether it's a client with a new phone or someone being dodgy. Most people don't get a new phone and jump straight on to do financial stuff, so those that do get flagged, and we can quickly, in an automated way, determine the level of shonk and decide on appropriate actions. The various configurations we have set up will sometimes send them a verification email, trigger a two-factor authentication request, alert a secondary number to the attempted access, or schedule a call from a fraud operator before anything serious can occur. All depends on what they tried to do, what else they have open on their phones, their activity, history etc.
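
For a feel of how those tiered responses might hang together, here's a rough sketch. The thresholds, score and action names are invented for illustration, not lifted from any real config:

```python
# Toy risk-based step-up logic: the same assessment can end in a silent pass,
# an email check, 2FA, an alert to a secondary number, or a fraud call.
def choose_action(risk_score: float, action_requested: str) -> str:
    sensitive = action_requested in {"payment", "change_details", "new_application"}
    if risk_score < 0.2:
        return "allow"
    if risk_score < 0.5:
        return "send_verification_email"
    if risk_score < 0.8:
        return "require_2fa" if not sensitive else "alert_secondary_number"
    return "schedule_fraud_operator_call"

print(choose_action(0.65, "payment"))   # -> alert_secondary_number
```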

I cannot give you everything you want on this which is why I keep directing you to go and find the technical articles. There are enough published for this software. It's a commercial product and isn't hiding anything. I just don't know what's going to be of value to you.

Just know that this sort of stuff is operating at the AWS data center level and not just at a business level, which is why it's so powerful at making the association maps with zero friction.
 

Squidfayce

Eats Squid
Here's an older article that talks a little about ThreatMetrix in a non-standard setting and how it works there. It has several links to privacy-related matters, LexisNexis connections to law enforcement supply, etc.

 

johnny

I'll tells ya!
Staff member
Nice, thank you!

Am I correct in saying that it's next level/highly sophisticated probabilistic matching?
 

Squidfayce

Eats Squid
Probabilistic matching plays a part, especially if the activity is strange or unexpected, like that of a new device with no online history. That's the essence of the fraud protection aspect of the software. But mostly it's not making assumptions. Most of the time it simply knows who you are, without doubt. New devices quickly generate enough data from activity, location and browsing to be matched to users and their daily ecosystem of devices and habits etc. That's all verifiable too as you start logging into websites, banking, apps etc.
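
Probabilistic matching in this context is basically record linkage: score how well a new device's observed attributes line up with a known user's profile and accept the match above some threshold. A minimal, made-up illustration (the attributes and weights are mine, not from any vendor documentation):

```python
# Toy probabilistic record linkage: weighted agreement between a new device
# and a known user profile, normalised to 0..1.
WEIGHTS = {
    "home_wifi_ip": 3.0,
    "usual_suburb": 2.0,
    "email_logged_in": 4.0,
    "timezone": 0.5,
    "language": 0.5,
}

def match_score(new_device: dict, known_profile: dict) -> float:
    score = sum(w for attr, w in WEIGHTS.items()
                if new_device.get(attr) == known_profile.get(attr))
    return score / sum(WEIGHTS.values())

new_device = {"home_wifi_ip": "203.0.113.45", "usual_suburb": "Marrickville",
              "email_logged_in": "user@example.com", "timezone": "Australia/Sydney",
              "language": "en-AU"}
known_profile = dict(new_device)   # same person on a new phone

print(match_score(new_device, known_profile))   # 1.0 -> almost certainly the same user
```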

Remember most of this fraud prevention stuff has massive surveillance capabilities intrinsically built into it.
 

Squidfayce

Eats Squid
This might be of interest too, if you haven't spotted it in the article above.


It's pretty thorough in its topics and about the methods, but it was published in 2017, so some of the key concepts may have been superseded by more fucked up ways to achieve the same outcomes. Interesting reading nonetheless.

Think you might find the stuff about data brokers interesting, and the key developments sections.
 

Squidfayce

Eats Squid
Yes, being used for good there, fucking scary though
Yeah, it's the whole "if you're not doing anything wrong, you've got nothing to worry about" scenario. Though the concerns are pretty real. No consent, no opt in/out necessary, your device is being probed by algos.

This one seems like a no brainer - the abuse images already exist, AI can make the matches efficiently.

Though the potential for bias in other things this sort of approach could be used for is the worry, e.g. searching for terrorist associations - we already know facial recognition has coded bias issues that were only discovered in the last few years.

I guess time will tell.
 

madstace

Likes Dirt
In the case of child abuse material, it's objective. There's no reason you could be in possession of it on your phone and it not be wrong.

I've timestamped the URL to skip to my point but forget the overreach/abuse of these sorts of systems by governments, this illustrates that AI/ML is far from infallible. I'm not going to argue for a second that identifying pedos isn't a good use of such tech, but as with any of these systems HUMAN oversight is REQUIRED! Robodebt was bad enough, potentially ending up on a sex offenders register because of unchecked automated systems is just as scary, if not more so.
 

Squidfayce

Eats Squid

I've timestamped the URL to skip to my point but forget the overreach/abuse of these sorts of systems by governments, this illustrates that AI/ML is far from infallible. I'm not going to argue for a second that identifying pedos isn't a good use of such tech, but as with any of these systems HUMAN oversight is REQUIRED! Robodebt was bad enough, potentially ending up on a sex offenders register because of unchecked automated systems is just as scary, if not more so.
I'm fairly certain no one is going on a register before someone eyeballs said images on your devices. The article states someone would review the image before notifying law enforcement.
 

Squidfayce

Eats Squid
Also, there's a difference between using an algo to identify known images versus flagging images that present certain traits. The tech described in the article was going to scan phones for known images of child abuse.
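
For clarity on that difference: matching known images is essentially a hash lookup against a database, not a classifier guessing at content. Real systems use perceptual hashing (PhotoDNA-style) that survives resizing and re-encoding; the plain SHA-256 in this sketch only catches byte-identical copies, but it shows the idea:

```python
# Toy "known image" matcher: hash each file and look it up in a database of
# hashes of previously identified images.
import hashlib
from pathlib import Path

KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # example entry
}

def hash_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(paths: list[Path]) -> list[Path]:
    """Return files whose hash appears in the known-image database."""
    return [p for p in paths if hash_file(p) in KNOWN_HASHES]

# matches = scan(list(Path("/photos").glob("*.jpg")))   # hypothetical usage
```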
 

madstace

Likes Dirt
I'm sure no one would have thought anyone would be getting automated, unvetted and likely incorrect invoices for thousands of dollars of repayable benefits, but hey, now we have suicides attributable to such a system. It's not about this particular instance, it's about organisations that feel systems doing these sorts of tasks don't require any human oversight before a finding becomes a real-world event for someone.

Even if their system is an image database match, that presents potential loopholes that could cause more harm than good. In any event, I don't trust anyone that discounts the human element, good or bad.
 

Squidfayce

Eats Squid
Came across a reference to an internal Google video that was apparently leaked a few years ago, called "The Selfish Ledger", supposedly from one of their "ultra super duper tech laboratory groups", Google X or whatever it's called.

Really creepy stuff.

I've not looked into the veracity of it, but it kind of raises the question "why is this still online on a Google-owned platform if it was so super secret?" At the same time, there's a lot of reason not to spend time trying to scrub it from these sites. Anyway, here is a link (one of many) from someone hosting it on their channel. If it's legit, it has some pretty significant implications. Stick with it - it starts slow, but the context is needed.

 

Litenbror

Eats Squid
This is turning out to be a handy add-on to my DuckDuckGo app. Currently in beta testing, but it seems to work well at blocking trackers from installed apps and not just web pages. Only installed it last night!

 

Squidfayce

Eats Squid
Mobile carriers had been caught injecting special tracking mechanisms into their users' web surfing activities. Because they are embedded at the network level, users cannot block these "supercookies". Such tracking mechanisms allow for the creation of rich user data profiles and for their use in advertising and tracking.
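
To show why an on-device blocker can't touch this, here's a conceptual sketch of the injection happening on carrier infrastructure, in the spirit of the X-UIDH header some carriers were caught adding to plain-HTTP traffic. The header name and ID derivation here are made up for illustration:

```python
# Conceptual supercookie injection: the carrier adds a stable per-subscriber
# header to requests *after* they leave the phone, so an on-device tracker
# blocker never even sees it.
import hmac, hashlib

CARRIER_SECRET = b"carrier-side-key"

def subscriber_token(subscriber_id: str) -> str:
    # Stable per-subscriber value the carrier can hand to ad partners.
    return hmac.new(CARRIER_SECRET, subscriber_id.encode(), hashlib.sha256).hexdigest()[:20]

def inject_supercookie(http_headers: dict, subscriber_id: str) -> dict:
    headers = dict(http_headers)                                # original request from the phone
    headers["X-Carrier-UID"] = subscriber_token(subscriber_id)  # added in transit
    return headers

request = {"Host": "example.com", "User-Agent": "Mozilla/5.0"}
print(inject_supercookie(request, "61400000000"))
```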

I have no doubt your app blocked 350 tracking attempts. I doubt it made a difference.

I posted this above, and it's quite in-depth, but if you get even part way through it, you'll realise how futile it is.
 