When I had an interview at Apple I asked them why they were using PHP, and what it boiled down to was momentum. The team was able to rapidly satisfy requests and could navigate the codebase easily, so they had no reason to rewrite it. That, among other insights, helped shape my viewpoint that the best tool for the job is the one you (or your team) are most proficient in. What business is going to wait 3x as long for the technical merits of a different tool? It doesn't really make sense.
If you actually cared so much about victims of fraud you wouldn't be mad about a fraudster getting caught. You'd care about all the victims, not just the ones who were victimized by Somalis.
The benefit of having a team of QA engineers create tests is their differing perspectives, so with LLMs being trained to act like affirmation engines you have to wonder how that impacts the test cases they create. It's the problem of LLMs being miserable at critiques manifesting itself in a different way.
That said, I am by no means an AI hater; I just want models to be better than they currently are. I'm tired of tech demos and benchmark stats that don't really mean much beyond impressing someone who isn't in a critical-thinking mindset.
Sounds like an antipattern being rebranded as a solution. I shouldn't have to precisely instruct AI on how to solve every problem. I should be able to give it requirements and with its vast knowledge it should be able to understand various design elements within a system like design patterns and make the appropriate change without me needing to tell it to look for those things.
Honestly, I'm inclined to think a lot of the people who are wowed by benchmarks and simple tech demos probably aren't doing very much at their day job, or they're working on simple codebases or ones that don't have very many users (more users == more bugs found). When you throw these models at complex software projects like SOAs, big object-oriented codebases, etc., their output can be totally unusable.
I don't think it's that surprising. Bash is old as dirt, and scripts are by definition meant to be simple. Where AI struggles is when you add complexity like object-oriented design. That's when its tendency to solve every problem in its own idiosyncratic way sends things off the rails. LLMs know design patterns exist, but they don't know how to use them, because that's not how deep learning approaches problem solving.
There isn’t one single way to be a dedicated gamer.
Inevitably everyone has finite time and access to games and has to make choices about what to play.
As a Mac guy, I always found the game platform wars weird because even on the weakest gaming platform there are still more good games than anyone can individually play. And even on Windows, probably the strongest gaming platform, you’re still missing out on many significant games.
I totally understand buying a system because it has some game that you absolutely must play. I bought an OG Xbox back in the day because I thought I desperately needed to play Deus Ex: Invisible War when it didn’t come to Mac. Got burned on that one, but at least I had Halo before it came to Mac (and was in the end much better there than on Xbox due to expanded online multiplayer).
What I actually don’t get is folks who have to play the hot game of the week every week. Just seems expensive in terms of money, time, and space for different systems, and you only scratch the surface of the games.
Generally speaking, humans are more often than not the weakest link in the chain when it comes to cybersecurity, so the fact that most of their access comes from social engineering isn't the least bit surprising.
They themselves are likely, to some extent, victims of social engineering as well. After all, who benefits from creating exploits for online games and getting children to become script kiddies? It's easier (and probably safer) to make money off of cybercrime if your role isn't committing the crimes yourself. It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
I'm not sure this is very fair because humans are often not given the right tools to make a good decision. For example:
To gift to a 529 regardless of the financial institution, you go to some random ugift529.com site and put in a code plus all your financial info. This is considered the gold standard.
To get a payout from a class-action lawsuit over a breach that leaked your data, you must go to some other random site (usually a recently registered domain loosely related to the settlement, run by Kroll) and enter basically more PII than was leaked in the first place.
To pay your fed taxes with a credit card, you must verify your identity with some 3rd party site, then go to yet another 3rd party site to enter your CC info.
This is insane and forces/trains people to perform actions that in many other scenarios lead to a phishing attack.
Don't forget magic links in email for auth and password resets training people that it's OK to click links in emails.
Yes, we've (the software industry) been training people to practice poor OpSec for a very long time, so it's not surprising at all that corporate cybersecurity training is largely ineffective. We violate our own rules all the time.
Has anyone invented an alternative to that yet? I could imagine emailing you a code to enter in a specific part of a site to get you to the right link, but then people could just scan all the codes. To solve that you could make the codes long 64-bit strings, but then they're too hard to remember, so you'd provide functionality to automatically include that info and get you to the site, but then that's just a link again.
Maybe if you expected everyone to copy-paste the info into the form? That might work
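To put rough numbers on the scanning concern, here's a small sketch using Python's standard secrets module (purely illustrative) of the entropy gap between a short numeric code and a copy-pasteable token:

```python
import secrets

# A short numeric code is guessable if an attacker can try codes freely:
# 6 digits means only 1,000,000 possibilities, so strict rate limiting is mandatory.
numeric_code = f"{secrets.randbelow(10**6):06d}"

# A copy-pasteable token can carry far more entropy, so scanning is hopeless:
# token_urlsafe(32) encodes 256 bits of randomness (~43 characters).
paste_token = secrets.token_urlsafe(32)

print(numeric_code)  # e.g. "042917"
print(paste_token)   # far too long to remember or type, but fine to copy-paste
```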
I recently discovered that Microsoft's SSO doesn't guarantee email veracity. Basically, you can spoof emails via Active Directory, so if a site supports Microsoft's SSO and doesn't do a second verification, someone could log in to your site with someone else's email.
I mean, what's the point of their SSO if you're just going to need to verify it with an email code anyways?
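One mitigation on the relying-party side, sketched below in generic OIDC terms (not Microsoft-specific; it assumes the token signature has already been validated and `claims` is the decoded payload), is to refuse to treat the email claim as an identifier unless the provider explicitly marks it verified:

```python
def trusted_email(claims: dict) -> str | None:
    """Return the email only if the identity provider vouches for it.

    Assumes `claims` is the already-signature-verified payload of an OIDC
    ID token. The `email` claim by itself is effectively user-controlled
    metadata; without `email_verified` (or an equivalent provider-specific
    signal), it must not be used to link or create accounts.
    """
    email = claims.get("email")
    if email and claims.get("email_verified") is True:
        return email
    return None  # fall back to your own email confirmation flow
```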
It’s both easier and more complicated than that: use 6-digit codes tied to a specific reset session, with only 3 attempts allowed per session, and sessions lasting only 5 minutes.
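A toy, in-memory sketch of that scheme (the function and variable names are made up; a real implementation would persist sessions and rate-limit per account):

```python
import secrets
import time

SESSION_TTL = 5 * 60  # sessions last only 5 minutes
MAX_ATTEMPTS = 3      # 3 attempts allowed per session

# reset_sessions: session_id -> {"code": str, "expires": float, "attempts": int}
reset_sessions: dict[str, dict] = {}

def start_reset_session() -> tuple[str, str]:
    """Create a reset session and the 6-digit code to email to the user."""
    session_id = secrets.token_urlsafe(16)
    code = f"{secrets.randbelow(10**6):06d}"
    reset_sessions[session_id] = {
        "code": code,
        "expires": time.time() + SESSION_TTL,
        "attempts": 0,
    }
    return session_id, code

def check_reset_code(session_id: str, submitted: str) -> bool:
    """Verify a code against its own session; codes are useless elsewhere."""
    session = reset_sessions.get(session_id)
    if session is None or time.time() > session["expires"]:
        reset_sessions.pop(session_id, None)
        return False
    session["attempts"] += 1
    if session["attempts"] > MAX_ATTEMPTS:
        reset_sessions.pop(session_id, None)  # more than 3 tries: burn the session
        return False
    if secrets.compare_digest(session["code"], submitted):
        reset_sessions.pop(session_id, None)  # single use
        return True
    return False
```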
Don't allow HTML rendering of <a> elements where the href links to a different URL than the one shown, don't allow any (Java)scripts to run, or at least warn the user that they are about to open a new window to domain XYZ.
This is how I've spotted quite a few scams (apart from the obvious ones with improper wording or visual formatting, but those are deliberately bad so that they only catch the most unskilled or gullible, i.e. your grandma).
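For what it's worth, the mismatch check is easy to automate. Here's a rough standard-library sketch (the class name is hypothetical) that flags anchors whose visible text looks like a URL on a different domain than the real href:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchFinder(HTMLParser):
    """Flag <a> tags whose visible text looks like a URL on a different
    domain than the real href target, a classic phishing tell."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = ""
        self.suspicious = []  # list of (shown_text, real_href)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = self._text.strip()
            if shown.startswith(("http://", "https://", "www.")):
                shown_host = urlparse(shown if "://" in shown else "https://" + shown).netloc
                real_host = urlparse(self._href).netloc
                if shown_host and real_host and shown_host != real_host:
                    self.suspicious.append((shown, self._href))
            self._href = None

finder = LinkMismatchFinder()
finder.feed('<a href="https://evil.example/login">https://yourbank.com/login</a>')
print(finder.suspicious)  # [('https://yourbank.com/login', 'https://evil.example/login')]
```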
About 10 years ago, I got an email from Microsoft of all people (!) which to any reasonably security-trained person would look entirely like a phishing email: [0]
1. It said "Dear User" instead of a name/username;
2. It talked about how they were upgrading their forum software and as such would require me to re-login;
3. It gave me a link to click in the email without any stated alternative;
4. It warned me that if I didn't do this, I would no longer be able to access the forum;
5. The domain of the URL that the link went to was not microsoft.com, but a different domain that had "microsoft" in it.
It was a textbook example for how a phishing email would look, and yet it was actually a legitimate email from Microsoft!
I haven't had any others like it since, but that was an eye-opener for sure.
If I want to use a passkey on my phone, I have to bio-authenticate into it. Similarly with Windows Hello as a passkey provider, via my camera. It works well and is pretty seamless, all things considered. I prefer it to the email/code/magic-link method.
The mechanics are a solved problem (by SQRL, I think), but it's too much responsibility for basically everyone.
You really do fully own and control your identity, and if you botch it and lose your top level keys, no one else can give you a "forgot password" recovery.
If this level of unforgiveness were dropped onto everyone overnight, it would mean infinite lost life savings and houses and just mass chaos.
Still, I think it would be the better world if that were somehow actually adopted. The responsibility problem would be no problem if it were simply the understood norm all along that you have this super important thing, and here is how you handle it so you don't lose your house and life savings, etc.
If you grew up with this fact of life, and so did everyone else, it would be no problem at all. If it had been developed and adopted at the dawn of computers, so that you learned this right along with learning what a computer was in the first place, no problem. It's only a problem now because there are already 8 billion people using computer-backed services without ever having had to worry about any of this before.
The real reason it's never gonna happen is exactly because it delivers on the most important promise: ultimate end-user agency and actual security.
No company can own it, or own end users' use of it. It can't be used for vendor lock-in, data collection, profiling, government back doors, censorship, discrimination, or any of the other things that holding someone's password (or the entire auth technology) can be used for to control users.
No (large) company nor any government has any interest in that, and it's way too technical for 99.99% of people to understand the problems with all the other popular auth systems so there will be no overwhelming popular uprising forcing the issue, and so it will never happen.
A method already exists (I think) that solves the hard problems and delivers the thing everyone says they want, and that everything else claims to be groping for, but we will never get to use it.
I think this is the way forward. We shouldn't continue relying on email (or proving ownership over an email address for that matter) as identity.
Public/private keys with a second factor (like biometrics) as identity is, I think, a good option: a way to announce who you are without actually revealing your identity (or your email address).
Tbh that's how all the age-verification crap should work too for the countries that want to go down that road, instead of having people upload a copy of their actual ID to some random service that is 100% guaranteed to get breached and leaked.
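For illustration, here's a minimal challenge-response sketch with Ed25519 via the cryptography package (all names are hypothetical; a real deployment would keep the private key in a secure element behind a biometric unlock and bind the challenge to the session):

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- user's device: the private key never leaves it (ideally it sits behind
# a biometric gate in a secure enclave; here it's just in memory) ---
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()  # this is all the service ever stores

# --- service: issue a random challenge instead of asking for a password ---
challenge = os.urandom(32)

# --- user's device: prove possession of the key by signing the challenge ---
signature = device_key.sign(challenge)

# --- service: verify the proof against the stored public key ---
try:
    public_key.verify(signature, challenge)
    print("authenticated")  # no email, password, or PII exchanged
except InvalidSignature:
    print("rejected")
```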
Biometrics might be useful in establishing a (PKI) key, but are not suitable for the key itself.
"Something you have" is far more useful, especially if that something is itself cryptographically-based. Yubikeys, RSA fobs (generating one-time codes), and wearable NFC tokens (rings, amulets), and the like, which may be autheticated in part based on biometrics and other attestation, but are themselves revokable, would be a far better standard.
What the General Public can be expected to utilise willingly and effectively seems to be the larger problem, as well as what commercial and governmental standards are established.
This is very much a US issue, largely because the government outsources everything to the private sector. This proliferation of random websites and shady 3rd parties is one of the consequences of this.
Don't forget credit checks when you apply for an apartment! "Go to this website sent via e-mail from someone you only know through a craigslist ad and enter all of your PII. On top of that about 2/3 of what is listed actually is phishing attempts and good luck telling the difference"
Like when you suddenly have to move to a different city due to an unexpected job change and are trying to schedule as many viewings in one weekend as possible?
Reminds me of a co-founder of an adtech company I know. They're a platform that buys inventory using automated trading, mostly mobile, and they realized that most of their customers were clickfraud operators, scammers, etc. He didn't want to go into too much detail.
But he shrugged it off.
I bet there are quite a few shops online that may sell gift cards that are used in money laundering schemes. Bonus points if they accept bitcoin.
But those are all quite implicitly used by cybercrime. I can imagine there are quite a few tools at their disposal that are much more explicit.
Worked at a place that used to do a kind of arbitrage between ad clicks and traditional print. A large percentage of traffic, especially mobile, was obviously either toddlers or bad bots, yet we were billing our customers for the 'engagement'.
I worked at a $xxxB company that had an internal red team. They ran almost as a separate company but were housed in one of our offices.
I was involved in probably 15 operations with them while I was there. They would usually get C&C within six hours, and every single time it was phishing lol.
Insofar as every security mechanism was made by a human, yes.
But if we're holding users accountable because 1 out of every 100 users clicks a link in a phishing email like clockwork, we're bad at both statistics and security.
>It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
Who is making money off of selling premium software, that's not marketed as for cybercrime, to non-governmental attackers? Wouldn't the attackers just pirate it?
This type of software is being sold on many forums, both on the clearnet and darknet.
> Wouldn't the attackers just pirate it?
Sometimes the software is SaaS (yes, even crimeware is SaaS now). In other cases, it has heavy DRM. Besides that, attackers often want regular updates to avoid things like antivirus detections.
I assume the forums you're talking about are cybercrime forums. So I think that counts as "marketed for cybercrime". I'm asking if there's anything not marketed for cybercrime.