Tech Freedom

Weekend Edition 53: AI ALL the Things & More

WE 53 Blog
3 Letter Agencies Warn of “New” Ransomware Threat
LastPass: Strengthen Your Passwords, Y’all
NeuraLink Enters Initial Human Trials
AI Regulation: Possible?
Moar AI… AI ALL THE THINGS
AI Legal Troubles
TikTok Troubles
Google Antitrust Trial Phase 1: Start
Signal Researching Quantum-Proof Encryption

WE 1 – Warning About Snatch Ransomware
The Feds (FBI and CISA) are warning us about a new evolution of Snatch, a ransomware-as-a-service package. Some version or other of it has been floating around for about five years now. Here’s what it has been seen doing lately (incidentally, it only hits Windows machines): once it infects a machine, it forces a reboot into Safe Mode, and from there it runs roughshod over your files, encrypting them with nothing to stop it, since most security software doesn’t load in Safe Mode. These US agencies issued the following advice to prevent an infection or limit its reach:
§ Reduce threat of malicious actors using remote access tools by:
§ Auditing remote access tools on your network to identify currently used and/or authorized software.
§ Reviewing logs for execution of remote access software to detect abnormal use of programs running as a portable executable [CPG 2.T].
§ Using security software to detect instances of remote access software being loaded only in memory.
§ Requiring authorized remote access solutions to be used only from within your network over approved remote access solutions, such as virtual private networks (VPNs) or virtual desktop interfaces (VDIs).
§ Blocking both inbound and outbound connections on common remote access software ports and protocols at the network perimeter.
§ Implement application controls to manage and control execution of software, including allowlisting remote access programs.
§ Application controls should prevent installation and execution of portable versions of unauthorized remote access and other software. A properly configured application allowlisting solution will block any unlisted application execution. Allowlisting is important because antivirus solutions may fail to detect the execution of malicious portable executables when the files use any combination of compression, encryption, or obfuscation.
§ Strictly limit the use of RDP and other remote desktop services. If RDP is necessary, rigorously apply best practices, for example [CPG 2.W]:
§ Audit the network for systems using RDP.
§ Close unused RDP ports.
§ Enforce account lockouts after a specified number of attempts.
§ Apply phishing-resistant multifactor authentication (MFA).
§ Log RDP login attempts.
§ Disable command-line and scripting activities and permissions [CPG 2.N].
§ Review domain controllers, servers, workstations, and active directories for new and/or unrecognized accounts.
§ Audit user accounts with administrative privileges and configure access controls according to the principle of least privilege (PoLP) [CPG 2.E].
§ Reduce the threat of credential compromise via the following:
§ Place domain admin accounts in the protected users’ group to prevent caching of password hashes locally.
§ Refrain from storing plaintext credentials in scripts.
§ Implement time-based access for accounts set at the admin level and higher [CPG 2.A, 2.E].
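To make one of those bullets concrete, here is a minimal Python sketch of the “audit the network for systems using RDP” step: a simple TCP connect check against port 3389. This is an illustrative toy for a home-lab subnet, not a replacement for a real asset inventory, and the address range in the comment is made up:

```python
import socket

def check_rdp(host: str, port: int = 3389, timeout: float = 0.5) -> bool:
    """Return True if the host accepts TCP connections on the RDP port."""
    try:
        # A completed TCP handshake means something is listening there.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Example sweep of a hypothetical office subnet:
# open_rdp = [f"192.168.1.{i}" for i in range(1, 255)
#             if check_rdp(f"192.168.1.{i}")]
```

Any host that shows up as listening and isn’t on your approved list is a candidate for having its RDP port closed, per the advisory above.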
That said, be careful out there, though most of you who watch, listen, or read this probably aren’t IT pros at organizations worth targeting. The notable victims so far have been government agencies and big companies. Connor may have other thoughts, but if you do not work somewhere that holds sensitive information worth ransoming, you likely don’t have much to fear from the gangs who rent access to this tool.
https://www.techradar.com/pro/security/fbi-and-cisa-issue-warning-about-dangerous-new-ransomware-strain
https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-263a

WE 2 – LastPass: We’re Gonna Need Y’all to Step Up
I don’t know if you remember, but late last year there was a major data breach at the well-known password manager LastPass. Now they are requiring all users to lengthen their master passwords to at least 12 characters to slow down future attackers. This is a good idea, but how about changing the way they do business? Why is it on their clients to clean up the mess? Isn’t there something more that LastPass could or should do to prevent this sort of thing in the future? Not much else to say here, other than to suggest that you go FOSS and use a self-hosted Bitwarden or something like that, so that your passwords aren’t stored in some cloud somewhere that simply begs to be hacked because it is centralized data.
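Why 12 characters? Each extra character multiplies the brute-force search space. Here is a quick back-of-the-envelope sketch in Python, assuming a roughly 94-character printable-ASCII set and a uniformly random password; real attacks on weak, human-chosen passwords are far cheaper than this worst case, and the guess rate is a made-up round number:

```python
import math

def entropy_bits(length: int, charset: int = 94) -> float:
    """Bits of entropy for a uniformly random password over a given charset."""
    return length * math.log2(charset)

def years_to_crack(bits: float, guesses_per_sec: float = 1e12) -> float:
    """Expected years to search half the keyspace at a given guess rate."""
    return (2 ** bits / 2) / guesses_per_sec / (365 * 24 * 3600)

print(round(entropy_bits(8)))   # ~52 bits
print(round(entropy_bits(12)))  # ~79 bits
```

Going from 8 to 12 random characters adds about 26 bits, i.e. tens of millions of times more work for an attacker grinding on a stolen vault.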
https://www.pcmag.com/news/lastpass-requires-users-to-update-master-password-to-at-least-12-characters

WE 3 – We Are Borg, Resistance Is Futile
Neuralink is Musk’s pet project to trans-humanize us by embedding chips in our brains that can link up with computers. This is cool on a purely technological level, but creepy and stupid AF on a practical one. I mean, what nerd hasn’t dreamed of working their computer at the speed of thought, rather than being limited by how fast they can physically type or move the mouse? On the other hand, this is a proprietary chip that Elon wants to put in everyone’s brains. Could it have a facility for reverse communication as well, such that suggestions from the computer, or your smart TV, radio, or streaming service, could be placed directly into your brain? I’m not a fear guy, but this genuinely freaks me out. If it were limited to one-way linkage, only ever used for people who cannot physically work their phones or computers, and only gathered the data absolutely crucial to making the thing work better (versus profiling users), then I MIGHT be slightly more open to it. At this point, I trust this project about as much as I trust Musk… about as far as I can throw him.

Early in this piece, and in the title, I mentioned trans-humanism and the Borg from Star Trek. I have reason to believe that this is the end they are aiming for, where humanity becomes merged with machines and humans are no longer really human anymore. We’ve talked about trans-humanism before, but it has been a minute, so let’s get into it briefly. Trans-humanism is the belief system which holds that humanity as it is, is flawed, but that we have the technological know-how to fix it, and that we should. Think of it as a combination of the Borg and the Six Million Dollar Man / RoboCop / Inspector Gadget, only in real life. This is where these people, including Musk and Gates, want to take the human race. The Borg, if you didn’t know, live a collective existence: they all hear the thoughts of the collective, each has a specific task to perform, and that is the extent of their lives. No individuality; any one drone can be modified to replace any other. No private thoughts, no feelings, just the collective.

But back to present-day reality, and the reason we are talking about this: Neuralink has now been approved by the FDA to begin human clinical trials and is putting out a call for participants. That freaks me the hell out, tbh.
https://www.pcmag.com/news/elon-musks-neuralink-puts-out-call-for-humans-to-try-its-brain-implant

WE 4 – AI Regulations: Possible, or Not?
Some think yes, others are not so sure, and still others are hair-on-fire about the whole thing and desperate to get it done ASAP. The UK and the US are working independently to understand and try to rein in the whole AI phenomenon. The UN is also starting to weigh in, with a preliminary report on the issue due by the end of this year. Jimmy Wales, of Wikipedia, believes it is not doable to try to control it at this point. All of these would-be regulators, he thinks (probably accurately), hardly understand it at all. In his view, the genie is out of the bottle, and he compares governmental efforts to regulate it to trying to control something like Photoshop now.

Another skeptic about global regulatory efforts is Pierre Haren, who has worked on AI for 45 years and was part of the team that created Watson at IBM. Remember Watson? I do. No one talks about that AI anymore, but then it wasn’t “generative” the way GPT, LLaMA, or PaLM are. His skepticism stems from the geopolitical realm more than the technical one, and his concerns are quite valid: countries will simply disregard regulations that don’t fit their desired ends, as they already have a habit of doing in other areas. That said, Mr. Haren is “flabbergasted” by the proliferation of generative AI tools and models; he thinks this is no mere parrot. Perhaps he knows better than I do, having worked in the field for longer than I’ve been alive, but that level of focus may also cloud his judgment. That is a lot of time to pour into a single type of project.

So far, we have been talking about the scenario at a global level, as the UN has its sights set on regulations that all member states must adhere to (we all know how well that has gone with even our own country, much less “rogue” nations like North Korea, Iran, or Pakistan). Now I’m going to turn to the UK and their efforts, more locally.
In the UK, their CMA (Competition and Markets Authority) has been working feverishly to develop a set of AI foundation model principles. Here are their suggestions (which is really all these principles are):
· Accountability: AI foundation model developers and deployers are accountable for outputs provided to consumers.
· Access: Ongoing ready access to key inputs, without unnecessary restrictions.
· Diversity: Sustained diversity of business models, including both open and closed.
· Choice: Sufficient choice for businesses so they can decide how to use foundation models.
· Flexibility: Having the flexibility to switch and/or use multiple foundation models according to need.
· Fair dealing: No anti-competitive conduct including self-preferencing, tying or bundling.
· Transparency: Consumers and businesses are given information about the risks and limitations of foundation model-generated content so they can make informed choices.
These seem innocuous enough, but do they actually DO anything? Usually these sorts of things don’t, as they are just governmental suggestions, but the CMA is clearly thinking hard about all of this. They are concerned (perhaps worried, or even panicked, would be better words) chiefly about market dynamics: whether the playing field is fair for the small players and new entrants who aren’t named Microsoft, OpenAI, Google, or any of the other “big guys” in the field right now. It is all more or less nonsense. Maybe I’m being a little harsh, but government controls over free-market systems are a non-starter for me. I do see that they are currently, if begrudgingly, necessary, but I hate the notion of needing a nanny state to boss around companies which are too big to be good for humanity.
https://www.bbc.com/news/business-66853057
https://www.computerworld.com/article/3706991/uk-regulator-outlines-ai-foundation-model-principles-warns-of-potential-harm.html

WE 5 – AI ALL the Things
Microsoft is injecting all of its software with its GPT-4-based Copilot AI, from Windows to MS365, to Photos, to Paint, to Spotify integration and the Edge browser, starting to roll out on Tuesday. Are you comfortable with all of your Microsoft products being empowered with AI to gather all of your data? Google, for its part, is adding Bard to Gmail, YouTube, Docs, and much of its cloud stack as we speak. So, per the title of this piece: AI ALL the Things. Most of Microsoft’s software stack is now “AI-boosted,” and Google’s cloud stack, plus YouTube, is largely Bard-ed.

This privacy assault purely motivates me to ditch these solutions as much as possible. AI is a privacy nightmare, and Microsoft is driving it off the cliff in order to gather as much data on its users as possible, as if the rootkit-cum-OS that is Windows, plus the rest of Microsoft’s software stack, weren’t enough in terms of scraping data from users. Again, same story with Google/Alphabet, since many use their cloud as their internet OS: from Search, to email, to entertainment and information, to other cloud software (Docs, Sheets, etc).

On one hand, this is cool. That is, if we do not consider anything like privacy or security in the midst of it. We have talked about how Microsoft has almost bragged about single-handedly creating this mad rush to implement generative AI as it exists right now. The reality is that OpenAI, Google, and the rest of the industry were taking a much more careful approach before that push, and privacy can hardly be front and center for models which require as much data as these large-language-model-based AIs do in order to do what they do and keep improving. The whole thing is sketch to me anyway. My primary concern, however, is privacy and honoring the rights of individuals, versus just driving ever harder for greater and greater levels of tech progress.
The question is: progress toward what, exactly? A future where humans never use their brains for anything taxing or rewarding? A future where we cannot think for ourselves? Maybe we are already mostly there, which is a disturbing thought. My response: stand up and think for yourselves. Don’t just take my opinions, or anyone else’s, as gospel. Challenge everything. Do your own damn research and make up your own mind. I am not perfect in this regard, but I do try to grow. I will never give up my autonomy, such as it is, to a machine. I do not ever plan to use these things, even if it puts me behind the curve. My content will always be mine. My thoughts will be mine as much as humanly possible.
https://www.pcmag.com/news/microsofts-copilot-ai-coming-to-windows-on-sept-26
https://www.engadget.com/microsofts-latest-windows-11-update-drops-on-september-26-163553126.html
https://www.computerworld.com/article/3707074/google-adds-its-bard-chatbot-to-gmail-youtube-docs-and-other-apps.html

WE 6 – OpenAI Sued… Again
This time by George R.R. Martin, writer of the book series adapted into the disgusting-if-smash-hit Game of Thrones, with another well-respected author, John Grisham, on board with the suit. They claim that the GPT models infringe on their copyrighted materials in order to make themselves “smarter.” The case alleges that the LLMs in question have “engaged in theft on a mass scale,” because the authors’ works were used without proper remuneration for their intellectual property. Is it really a copyright claim, or are they afraid of being replaced, as the “expert” in the article cited below suggests?

I do not believe that generative AI, as publicly available now, can “create” anything unique. It is all derivative of the copious amounts of data these models scrape from around the web, paywalled or not. I am a firm supporter of intellectual property and copyrights, whether Connor is too much of a socialist to agree with that stance or not. We will butt heads on this for a long time, no doubt, and the argument on the air was likely fun to hear. But here is my bottom line: I am a content creator of sorts, a blogger and streamer who takes usually fairly left-leaning news stories and applies his own spin to make them read in as unbiased a way as possible, so I cannot espouse denying creators of anything the public values a means to make a living from their labor, whether it is a labor of love or not.

Connor feels that anything put on the internet should be fair game. To be clear, he is no more a fan of AI and the generative AIs out there than I am. However, he feels that IP and copyright are stupid, and that artists and authors should simply labor for the love of what they are doing and keep their day jobs to feed their families, unless they are commissioned to produce a piece or other creative output. We recently had this discussion and decided that we are never likely to see eye to eye on this issue, and that he is a walking ball of contradictions.
https://www.bbc.com/news/technology-66866577

WE 7 – DALL-E 3 Released
OpenAI’s image-generation AI has had its third major version drop recently. It is now able to accurately place text in images (a memer’s dream, perhaps). Like GPT-4, it will be baked into MS Copilot across Microsoft’s products. This is a privacy nightmare, but I feel like I’m beating a dead horse at this point. I don’t have much of benefit or much that is positive to say about this development, so I will cut this section short.
https://www.pcmag.com/news/openai-unveils-new-and-improved-ai-image-generator-dall-e-3

WE 8 – TikTok Back in the Legal Hotseat
TikTok has been used to stir up a lot of mess, because people have so little discernment and common sense these days. This has led to some good things, but also to many very bad ones. Certain viral videos have spurred spurious murder accusations, ginned up riots in various places, and created mayhem where none should have existed. This is a dangerous tool. Unscrupulous people who just want eyeballs and notoriety have fabricated stories and even interfered with the proper carriage of justice.

Internet sleuthing has its place, don’t get me wrong. Do your research as best you can so that you can form educated opinions, rather than just following the herd. That said, not many people can really be trusted as sources of information. Trust, but verify. Look for real evidence, whether it supports the opinion you prefer or not. Disconnect from the matrix. Stop doomscrolling on social media, whether YouTube, TikTok, Telegram, X, or anything else. Use your brain. Don’t get caught up in the furor of the moment. End the outrage cycle. TikTok is not being sued for this, yet, but perhaps it should be. Then again, do we hold the gun responsible in a shooting, or the shooter? If the user base dried up overnight, TikTok wouldn’t last long. I know I’m nobody, with a next-to-nil following at this point, but for the love of all that is holy, break your addictions to these platforms. Get away.
The EU is also fining TikTok for privacy violations regarding children’s data on the platform. Let me reiterate that: the EU is slapping TikTok with a $368 million fine for failing to protect kids’ data and for using dark patterns to keep people from limiting the platform’s data-gathering apparatus. The platform is, of course, taking issue with the ruling, protesting that it has already remedied most of the problems the Irish Data Protection Commission found with how it handled user data. We will see if this fine sticks. This is not the first time the platform has been slapped in the Eurozone, either: the UK fined them for similar issues back in April, but that fine was basically a slap on the wrist at $16 million. They make that back in a sneeze at this point. If these slow-boiling regulators are cracking down on them, we have a couple of questions to ask ourselves:
1) If the privacy protections on the platform (which exists to gather data on its users, by the way) are so lax, then why do we use it?
2) If it has destroyed lives, then why continue using it?
https://www.cnn.com/2023/09/15/tech/tiktok-fine-europe-children/index.html
https://www.bbc.com/news/technology-66719572

WE 9 – Google’s Battle is Just Beginning
The DOJ’s suit over Google Search’s monopolistic practices is just beginning. The parties have been in discovery for the better part of two years, but opening arguments happened just in the last week or two as I write this. I wish I had seen this article last week, but here we are. The government’s case is ultimately similar to ones the giant has faced elsewhere in the world: that its dominance in search and ads is not natural, but was created through contracts with browser and other app makers to embed its engine as the default (even in Firefox), with punishment for anyone who breaks those contracts.

Are you still among the roughly 89% of the market that uses Google Search instead of taking your data elsewhere? Why is that? Convenience? Have you not been convinced that they do not serve results neutrally, but base them on their own algorithms and on the ad spend of the companies that pay? I could tee off on this for a dog’s age, but I want to focus on the case at hand. To be clear, I am not saying that their results are bad, just biased in very intentional ways. If the engine didn’t work, the world would have rejected it long ago, the way it did Ask.com and similar search engines in days gone by. Remember Ask.com? Maybe I’m dating myself. Ask was originally Ask Jeeves, one of the first search engines out there, and one of the first that could understand full-sentence queries; Google was a joke in comparison back then.

What is in question is Google’s dominance in the market: whether it is natural and due to innovation, or artificial and due to anti-competitive practices that have stifled or even extinguished competitors. The government’s case hinges on the latter. I hope Alphabet gets its ass handed to it and is broken up over its gross disregard for user privacy, but that is just me. What do you guys think? Will this case actually lead to something positive for the public, or not?
https://www.computerworld.com/article/3706516/gloves-come-off-during-day-one-of-googles-antitrust-trial.html

WE 10 – Signal to Boost Encryption
Signal is already moving to strengthen its encryption to prevent future quantum computers from breaking it. This may seem premature, but if you look at what quantum computers can do versus traditional systems (and Connor will doubtless contradict what I’m about to say), it is important to stay ahead of the curve. Rather than being limited to the plain 0s and 1s of traditional binary computing, quantum computers exploit quantum effects like superposition, which lets a register of qubits explore many states at once. For certain problems, according to some, this allows far more to be computed far faster than on the fastest and most powerful of traditional computers. One of those problems is the complex math underlying much of today’s cryptography: quantum algorithms like Shor’s could make it child’s play to crack, rendering previously strong forms of encryption essentially pointless, since keys that are infeasible to break even on the most powerful server clusters and supercomputers (much less consumer hardware) become recoverable. Quantum computing at that scale is not available to the public, much less to sub-state-level bad actors, at this time, but it is being worked on at a clip similar to large language models, which could make it a real issue in the next 5-10 years. I am actually more concerned about that than about generative AI morphing into a general AI that can think for itself and take over the world. It is still quite a ways off, but I am glad the team at Signal is already working on this.
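Signal’s announced approach pairs its existing elliptic-curve key exchange with a post-quantum algorithm (CRYSTALS-Kyber), so a session stays safe unless both are broken. Here is a toy, standard-library-only Python sketch of that hybrid idea: feed both shared secrets into one key-derivation function (HKDF, per RFC 5869). The function names and labels are my own illustration, not Signal’s actual API:

```python
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869) with SHA-256: extract a PRK, then expand it."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Combine a classical (e.g. ECDH) and a post-quantum shared secret.

    Concatenating both into the HKDF input means the session key stays
    secret as long as EITHER exchange remains unbroken.
    """
    return hkdf(classical_ss + pq_ss, salt=b"hybrid-demo", info=b"session-key")
```

The design choice worth noticing is the belt-and-suspenders combination: if the post-quantum scheme turns out to have a classical weakness, the elliptic-curve half still protects you today, and vice versa against a future quantum attacker.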
https://www.techradar.com/pro/security/signal-is-adding-quantum-level-encryption-to-help-keep-customers-safe