Question for the technically minded - VPN and data protection

Squidfayce

Trigger happy
Like I said, look for the technical articles on ThreatMetrix. Device ID is but one data point in the digital identity map. There are thousands for any given individual user, and they will vary depending on how a business wants to implement ThreatMetrix.

The technical articles will talk through the other data points: IP, true IP, your browser fingerprints, IMEI (pretty sure, can't recall), velocity of visits to sites, and how it makes associations between your activity, the devices you use and where you use them. Your head will spin.
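To give a feel for the fingerprinting side, here's a toy sketch of combining a few device signals into a stable fingerprint. The signal names are purely illustrative and not ThreatMetrix's actual schema; real products collect hundreds of attributes (fonts, canvas rendering, TLS quirks, etc.).

```python
import hashlib

def device_fingerprint(signals: dict) -> str:
    """Combine browser/device signals into a stable fingerprint hash.

    Signal names here are made up for illustration -- real products
    collect hundreds of attributes, not this handful.
    """
    # Sort keys so the same set of signals always hashes to the same value
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

fp = device_fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1170x2532",
    "timezone": "Australia/Sydney",
    "ip": "203.0.113.7",
    "true_ip": "198.51.100.9",  # e.g. the real address behind a VPN/proxy
})
```

The point of the "true IP" line is why a VPN alone doesn't help much: the VPN changes one signal while dozens of others stay identical, so the fingerprint still resolves to the same device.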

At our business we can tell when a newly activated burner has been probing us and is being used to try to access accounts or submit fraudulent applications, because of the device age and its newness to the aggregated digital map etc. You can then watch that same device, in real time, attempt to do weird shit all over your networks and sites and finally give up in frustration, because we basically feed our systems live device data that use automated controls to say "fuck off, there's nothing for you here".
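The "device age" check described above can be sketched as a simple rule. The thresholds here are invented for illustration; real systems tune them against live fraud data and combine them with many other signals.

```python
from datetime import datetime, timedelta

def risk_from_device_age(first_seen: datetime, now: datetime) -> str:
    """Flag devices with little or no history in the aggregated map.

    Thresholds are made up for this sketch -- real systems tune them
    and never rely on a single signal like this.
    """
    age = now - first_seen
    if age < timedelta(days=1):
        return "block"    # brand-new burner probing accounts
    if age < timedelta(days=30):
        return "review"   # young device, apply step-up checks
    return "allow"        # established history in the digital map
```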

We can ascertain real client devices easily and most of the time work out when they use a new phone to access our services without needing to bother them. We do this by cross-referencing device info we scrape from that phone's session on our sites against recent activity and location data stored and curated by ThreatMetrix in their data centers. We can 90% of the time tell whether it's a client with a new phone or someone being dodgy.

Most people don't get a new phone and jump straight on to do financial stuff, so those that do get flagged, and we can quickly and in an automated way determine the levels of shonk and decide on appropriate actions. The various configurations we have set up will sometimes send them a verification email, trigger a two-factor authentication request, alert a secondary number to the attempted access, or schedule a call from a fraud operator before anything serious can occur. All depends on what they tried to do, what else they have open on their phones, their activity, history etc.
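Those tiered responses can be sketched as a mapping from risk score to action. The actions mirror the ones listed above; the score cut-offs themselves are invented for the sketch, not anyone's real configuration.

```python
def choose_action(risk_score: float) -> str:
    """Map a fraud risk score (0.0-1.0) to a step-up action.

    The actions mirror the post's description; the numeric cut-offs
    are invented for illustration only.
    """
    if risk_score < 0.2:
        return "allow"
    if risk_score < 0.5:
        return "verification_email"
    if risk_score < 0.7:
        return "two_factor_challenge"
    if risk_score < 0.9:
        return "alert_secondary_number"
    return "schedule_fraud_operator_call"
```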

I cannot give you everything you want on this which is why I keep directing you to go and find the technical articles. There are enough published for this software. It's a commercial product and isn't hiding anything. I just don't know what's going to be of value to you.

Just know that this sort of stuff is operating at the AWS data center level and not just at a business level, which is why it's so powerful in making the association maps with zero friction.
 

Squidfayce

Trigger happy
Here's an older article that talks a little about ThreatMetrix in a non-standard setting and how it works there. It has several links to privacy-related matters, LexisNexis connections to law enforcement supply, etc.

 

johnny

I'll tells ya!
Staff member
Nice, thank you!

Am I correct in saying that it's next level/highly sophisticated probabilistic matching?
 

Squidfayce

Trigger happy
Probabilistic matching plays a part, especially if the activity is strange or unexpected, like that of a new device with no online history. That's the essence of the fraud protection aspect of the software. But mostly it's not making assumptions. Most of the time it simply knows who you are, without doubt. New devices quickly generate enough data from activity, location and browsing to be matched to users and their daily ecosystem of devices and habits etc. That's all verifiable too as you start logging into websites, banking, apps etc.
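A naive sketch of that matching idea: score how many of a new device's observed signals line up with a user's known profile, weighted by how distinctive each signal is. The signal names and weights are invented for illustration; real engines use far richer models over location, timing and habits.

```python
def match_probability(new_device: dict, known_profile: dict,
                      weights: dict) -> float:
    """Naive weighted overlap between a new device's signals and a
    user's known profile. Purely illustrative scoring.
    """
    score = total = 0.0
    for key, weight in weights.items():
        total += weight
        if new_device.get(key) == known_profile.get(key):
            score += weight
    return score / total if total else 0.0

p = match_probability(
    {"home_wifi_hash": "a1b2", "timezone": "Australia/Sydney",
     "bank_login": "user123"},
    {"home_wifi_hash": "a1b2", "timezone": "Australia/Sydney",
     "bank_login": "user123"},
    {"home_wifi_hash": 3.0, "timezone": 1.0, "bank_login": 5.0},
)
# p == 1.0: same home network, timezone and logins -> very likely
# the same person on a new phone
```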

Remember most of this fraud prevention stuff has massive surveillance capabilities intrinsically built into it.
 

Squidfayce

Trigger happy
this might be of interest too, if you haven't spotted it in the article above.


It's pretty thorough in its topics and about the methods, but was published in 2017, so some of the key concepts may have been superseded by more fucked up ways to achieve the same outcomes. Interesting reading nonetheless.

Think you might find the stuff about data brokers and the key developments sections interesting.
 

Squidfayce

Trigger happy
Yes, being used for good there, fucking scary though
Yeah, it's the whole "if you're not doing anything wrong, you've got nothing to worry about" scenario. Though the concerns are pretty real. No consent, no opt in/out necessary, your device is being probed by algos.

This one seems like a no brainer - the abuse images already exist, AI can make the matches efficiently.

Though the potential for bias in other things this sort of approach could be used for is the worry, e.g. searching for terrorist associations. We already know facial recognition has coded bias issues that were only discovered in the last few years.

I guess time will tell.
 

madstace

Likes Dirt
In the case of child abuse material, it's objective: there's no reason you could be in possession of it on your phone and it not be wrong.

I've timestamped the URL to skip to my point but forget the overreach/abuse of these sorts of systems by governments, this illustrates that AI/ML is far from infallible. I'm not going to argue for a second that identifying pedos isn't a good use of such tech, but as with any of these systems HUMAN oversight is REQUIRED! Robodebt was bad enough, potentially ending up on a sex offenders register because of unchecked automated systems is just as scary, if not more so.
 

Squidfayce

Trigger happy

I've timestamped the URL to skip to my point but forget the overreach/abuse of these sorts of systems by governments, this illustrates that AI/ML is far from infallible. I'm not going to argue for a second that identifying pedos isn't a good use of such tech, but as with any of these systems HUMAN oversight is REQUIRED! Robodebt was bad enough, potentially ending up on a sex offenders register because of unchecked automated systems is just as scary, if not more so.
I'm fairly certain no one is going on a register before someone eyeballs said images on your devices. The article states someone would review the image before notifying law enforcement.
 

Squidfayce

Trigger happy
Also there's a difference between using an algo to identify known images vs images that present certain traits. The tech described in the article was going to scan phones for known images of child abuse.
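That distinction (matching known images vs inferring traits of new ones) boils down to a database lookup. Here's a minimal sketch using exact SHA-256 hashes; the hash in the set is a made-up placeholder, and in practice perceptual hashes (PhotoDNA-style, which survive resizing and re-encoding) are used rather than exact hashes.

```python
import hashlib

# Hypothetical database of hashes of known images -- the entry below
# is a placeholder, not a real record.
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def matches_known_image(image_bytes: bytes) -> bool:
    """Exact-hash check against a database of known images.

    This only finds *known* images; it makes no attempt to judge
    the content of novel images, which is the distinction above.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```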
 

madstace

Likes Dirt
I'm sure no one would have thought anyone would be getting automated, unvetted and likely incorrect invoices for thousands of dollars of repayable benefits, but hey, now we have attributable suicides to such a system. It's not about this particular instance, it's about organisations that feel systems doing these sorts of tasks don't require any human oversight before a finding becomes a real-world event for someone.

Even if their system is an image database match, that presents potential loopholes that could cause more harm than good. In any event, I don't trust anyone that discounts the human element, good or bad.
 