Unsupervised Learning: No. 185

Unsupervised Learning, a podcast by Daniel Miessler


The Telegraph has found strong links between Huawei employees and Chinese intelligence agencies. Huawei's counter was that such ties are extremely common among telecom companies, and that it wasn't a big deal. The counter to that counter was, basically, "Well, then why did you try to hide it?" /gg

The NPM security team caught a malicious package designed to steal cryptocurrency. A lot of these packages work by uploading something useful, waiting until it's used by lots of people, and then updating it to include a malicious payload. My buddy Andre Eleuterio did the incident response on the situation there at NPM, and said they're constantly improving their ability to detect these kinds of attacks. Luckily NPM's security team had the talent and tooling to catch such a thing, but think of how many similar companies aren't so equipped. Any team that's part of a software supply chain should be taking this type of attack very seriously.

Federal agents are mining state DMV photos to feed their facial recognition systems, and they're doing it without proper authorization or consent. To me this has always been inevitable because, as Benedict Evans pointed out, it's a natural extension of what humans already do. You already have wanted posters. You already have known-suspects lists. And it's already fine for any citizen or any cop to spot a person on one of those lists and report them; in fact, it's not just allowed, it's encouraged. So the only thing happening here is that the process is becoming far more aware (through more sensors), and therefore more effective. Of course, broken algorithms that identify the wrong people, or that automatically single out groups of people without actual matches, need to be stamped out. But we can't expect society to forgo superior machine alternatives to existing human processes, such as identifying suspects in public. That just isn't realistic.

Our role as security people should be making sure these systems are as accurate as possible, with as little bias as possible, built by the best possible people. In other words, we should spend our cycles improving reality, not trying to stop it from happening.
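The NPM item above describes the bait-and-switch pattern: publish something benign, then slip a payload into a later version. One heuristic a registry or dependency-audit team might use is flagging updates that suddenly introduce install-time lifecycle hooks, since payloads often ride in via a new `postinstall` script. This is a minimal sketch of that idea, not how NPM's team actually does it; the manifests and the `collect.js` script name are hypothetical.

```python
# Flag a package update that introduces install-time lifecycle hooks
# that the previous version didn't have. These hooks run arbitrary
# code on every `npm install`, so their sudden appearance in a
# previously hook-free package is a strong signal worth reviewing.

LIFECYCLE_HOOKS = {"preinstall", "install", "postinstall"}

def new_install_hooks(old_manifest: dict, new_manifest: dict) -> set:
    """Return lifecycle hooks present in the new version but not the old."""
    old_hooks = LIFECYCLE_HOOKS & set(old_manifest.get("scripts", {}))
    new_hooks = LIFECYCLE_HOOKS & set(new_manifest.get("scripts", {}))
    return new_hooks - old_hooks

# Hypothetical package.json contents for two versions of the same package.
v1 = {"name": "useful-lib", "version": "1.0.0",
      "scripts": {"test": "node test.js"}}
v2 = {"name": "useful-lib", "version": "1.0.1",
      "scripts": {"test": "node test.js", "postinstall": "node collect.js"}}

print(new_install_hooks(v1, v2))  # → {'postinstall'}
```

A real detector would also diff the published tarball contents and look at obfuscation or network calls, but even this one check catches the laziest version of the attack.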
