• 1 Post
  • 44 Comments
Joined 4 months ago
Cake day: December 4th, 2024

  • Went ahead and looked up the original site and article in question: it’s not worth it. This person exclusively holds a stance of “O RLY??? (arms crossed, eyebrow raised)” to anything after the ultraviolet catastrophe’s resolution. They don’t have any solutions to the questions science has been trying to resolve. They just want to call the scientific community a bunch of quacks. They’re an anti-intellectual.

    If anyone wants to read the article and make their own comments, feel free. I will not be linking it because it does not deserve platforming, just like all the other unsubstantive ideas that die in darkness.

    EDIT: After also looking through the other articles, I do not doubt in the slightest that this article was AI slop. It reads like a bunch of summaries plucked out of Wikipedia. The other articles in question are: “AI Patent Assistance”, “Framework for LLM-Assisted Innovation and Strategy”, “Perceptron to Quantum AI”, “Novel Approach to AI Benchmarking”, “Unmasking AI Bias”, and “Untapped Potential of Mobile AI”. They also have a bunch more anti-intellectual drivel like “Physicists are Clueless”, “Evolution Flaws and Solutions in Quantum Measurement”, and “Exposing the Flaws of Conventional Scientific Wisdom”.


  • Your stance is literally just anti-establishment with no original conjectures. Your discourse is worthless for furthering physics. Rather than propel the field forward, your dialogue exists only to attack current progress and cause hesitation.

    If you had actually come up with a single testable claim, even one immediately refuted by evidence, you still would have contributed more to scientific discourse than your rant just did.

    Science is about finding the best answers for the questions we get from our observations about the world. We do not throw out these answers, no matter how bizarre and unwanted they may be, if they fit the evidence we have. We only seek better ones as we go.



  • That’s a view from the perspective of utility, yeah. The downvotes here are likely also from an ethics standpoint, since most LLMs today are trained on other people’s work without permission, all while using large amounts of water for cooling and energy from our mostly coal-powered grid. And that’s without mentioning the physical and emotional labor that many untrained workers must perform, sifting through these LLMs’ datasets and removing unsavory data for extremely low wages.

    A smaller, more specialized LLM could likely provide this same functionality with much less training, on a more exclusive data set (probably only a couple of terabytes at its largest, I’d wager), and would likely be small enough to run on most users’ computers after training. That’d be the more ethical version of this use case.
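
    A minimal sketch of what that could look like, using the Hugging Face transformers pipeline API. The model name here is just a placeholder assumption standing in for “some small instruction-tuned model”, not a recommendation:

        # Runs entirely on the local machine: no API calls, no remote data transfer.
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="Qwen/Qwen2.5-0.5B-Instruct",  # hypothetical small-model choice
            device_map="auto",                   # fits on a CPU or one consumer GPU
        )

        result = generator("Explain what a symlink is in one sentence.",
                           max_new_tokens=64)
        print(result[0]["generated_text"])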


  • I think it’s important to use the more specific term here: LLM. We’ve been creating AI automation for ourselves for years; the difference is that software vendors are now adding LLMs to the mix.

    I’ve heard this argument before in other instances. Ghidra, for example, just had an LLM pipeline rigged up by LaurieWired to take care of the more tedious process of renaming various functions during reverse engineering. It’s not the end of the analysis process during reverse engineering; it just takes out a large amount of busy work. I don’t know about the use case you described, but it sounds similar. It also seems feasible that you could train an AI system on your own machine (given you have enough reverse-engineered programs) and then run it locally to do this kind of work (roughly sketched below, after the edit), which is a far cry from the disturbingly large LLMs that are guzzling massive amounts of data and energy to learn and run.

    EDIT: To be clear, because LaurieWired’s pipeline still relies on normal LLMs, which are unethically trained, her pipeline using them is also unethical. It has the potential to be ethical, but currently is not.
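
    For the curious, here’s a rough sketch of the shape such a local-rename pipeline could take, written as a Ghidra Jython script. The suggest_name() helper standing in for a locally hosted model is hypothetical, and this is my guess at the idea, not LaurieWired’s actual code:

        from ghidra.app.decompiler import DecompInterface
        from ghidra.program.model.symbol import SourceType

        decomp = DecompInterface()
        decomp.openProgram(currentProgram)  # currentProgram/monitor are script globals

        def suggest_name(pseudocode):
            # Hypothetical: ask a locally hosted model for a descriptive name
            # given the decompiled C. Implementation intentionally omitted.
            raise NotImplementedError

        for func in currentProgram.getFunctionManager().getFunctions(True):
            if not func.getName().startswith("FUN_"):
                continue  # only touch Ghidra's auto-generated placeholder names
            res = decomp.decompileFunction(func, 30, monitor)
            if res.decompileCompleted():
                c_code = res.getDecompiledFunction().getC()
                func.setName(suggest_name(c_code), SourceType.USER_DEFINED)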



  • It feels like you’re making a semantic argument to downplay how tight a grip these programs have on their respective industry markets.

    If you are only ever considered for a job if you have Photoshop experience, and that is the normal treatment across the majority of the industry, that’s a standard that the industry is now holding you to - an industry standard if you will. It does not need to be backed by a governing body for it to still count.

    My current understanding is that you will not get a job at a major CGI company by knowing Blender (though the film ‘Flow’ shows that might change going forward). You have to know software like Houdini, 3ds Max, Maya, etc., if you want to be taken seriously.


  • That entire solution immediately falls apart when the paradigm is patented by the vendor, who immediately sues any competing software using UI elements even vaguely similar to theirs. This has been going on for decades, and one of three things usually happens: the competitor gets bought up, gets sued out of existence, or has to keep their UI different enough that there is little-to-no bleedover between the userbases (and usually starves to death from too little revenue).


  • There is a practice where software companies will either provide their software to schools and colleges for free or will pay schools and colleges to use it. This leads to students using that software, learning its particular paradigm, and being essentially forced to keep using it going forward because of how difficult it is to shift to another program with a different paradigm. This is Vendor Lock-In. The vendor locks you into their software.

    This leads to all future workers being trained in that software, so of course businesses opt to use it instead of retraining employees in another. This clashes with the idea of what an ‘industry standard’ is. The name suggests that it’s used in the industry because it’s better than other software, but in reality it’s standard just because of lock-in.

    This is how Windows cornered the operating system market - by partnering with vendors to ship their systems with Windows pre-installed.




  • Something that deteriorates the structural integrity of load-bearing frameworks /s.

    Being serious, it’s another programming language that is gaining popularity. Others can expand on why it’s good; I’ve never used it myself, so I can’t comment in good faith. I also don’t have any experience with Rust-bros, so I can’t comment on their code quality. I’ve mostly just been watching, amused, as they fight with the Linux development community.



  • This kind of erases the developer’s role in the issue you’re describing. The reason apps eventually require the CLI to complete tasks is that devs think CLI-first and then produce a stop-gap P&CI on top of it. It is precisely how devs in the Linux environment operate that creates the gap between CLI and P&CI. If apps were developed with the P&CI in mind first, with the CLI added after, this would not be a problem - and we know this because of every app developed for both Windows and Linux, which lacks these gaps in functionality - or lacks a CLI entirely. (A sketch of that shared-core structure is at the end of this comment.)

    Your stance also de-emphasizes the difficulty of learning the CLI for the first time. It’s not the most difficult thing ever, but it can be fairly frustrating. It’s not something you want to deal with when just trying to unwind after work on your PC, or while you’re trying to do your job at work. I think it’s pretty reasonable that most people don’t want to have to learn yet another paradigm just to do what they’ve already figured out how to do with a P&CI.

    Being realistic, of course, this paradigm shift is not going to happen. Linux will continue to make up only a small portion of end users’ computers because of this and the various other reasons it’s found unpalatable.

    I’ve heard that KDE and GNOME, however, are both at a level now where P&CIs are all you really need. I have not tried them myself, though.
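
    To make the “P&CI first, CLI after” point concrete, here’s a toy sketch (the app itself is invented) of the shared-core structure that avoids the gap: one function owns the behavior, and both interfaces are thin layers over it, so neither can drift ahead of the other.

        import argparse
        import tkinter as tk

        def convert(text):
            """Single source of truth for the app's behavior."""
            return text.upper()

        def run_gui():
            # The P&CI path: widgets call the same core function.
            root = tk.Tk()
            entry = tk.Entry(root)
            out = tk.Label(root)
            entry.pack()
            out.pack()
            tk.Button(root, text="Convert",
                      command=lambda: out.config(text=convert(entry.get()))).pack()
            root.mainloop()

        if __name__ == "__main__":
            parser = argparse.ArgumentParser(description="toy converter")
            parser.add_argument("text", nargs="?", help="convert once via the CLI")
            args = parser.parse_args()
            if args.text:
                print(convert(args.text))  # CLI path
            else:
                run_gui()                  # P&CI path, same core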


  • Obviously I’m talking about the DE packages, not the kernel or CLI base. We are talking about Windows users switching to Linux-based DEs, which are directly trying to compete with Windows and macOS.

    This is not me having an issue with CLIs. I’ve been on Linux for decades. I am pointing out the perspective of those who are frustrated with Linux DEs being blatantly unready for mass adoption, specifically because they expect layman users to learn the CLI. See my previous comment and this comment for more details.


  • I was specifically trying to not sound conspiratorial. I’m pointing out that it’s a matter of having learned a paradigm vs having to learn a new paradigm.

    Devs have already gotten used to the CLI and very rarely make full P&CI suites because of it. Even if the original dev only did a CLI for the app and someone came back and made a P&CI for it, those P&CI interfaces are still fairly barebones. This is a mix of devs knowing how good the CLI can be and the fact that it’s all open-source volunteer work.

    Layman users of P&CI-focused DEs actively avoid the CLI so they don’t have to learn it. This means that using most Linux apps is something to be avoided for most Windows users, making the OS base mostly unusable for them.

    To be clear, when I talk about P&CI-focused DEs, like Windows and macOS, I mean that if you cannot perform an action with the P&CI, then that action essentially does not exist for the average user. Contrast that with Linux DEs, where it’s quite common to have to directly edit configs or use the CLI to perform various actions.

    As a veteran user, the CLI does not bother me. I do understand the frustration of those who want some Linux DEs to become as default as Windows and macOS, because the lack of a P&CI does damage that effort.

    This is not every app on Linux, obviously, but the ones that are best at making sure the P&CI is full-fledged are the apps that develop for Windows and macOS as well as Linux - Blender, LibreOffice, Logseq, Godot, etc. The most common offenders are the utility apps, such as those that handle drivers, sound systems, DE functions, etc.



  • It’s not that they are mad that others use the CLI; it’s that they’re mad that Linux devs regularly stop creating P&CI features, instead opting for CLI-only functionality with no P&CI-equivalent action.

    It’s kind of obvious why - the CLI is already very flexible right out of the box, and it takes much less work to add functionality to a CLI than to create it for the P&CI (see the toy example at the end of this comment).

    At the same time, I understand the P&CI folks’ frustration, since one of the biggest obstacles to getting more people on Linux is the lack of P&CI solutions, and the fact that many actions on Linux are explained solely via the CLI.

    CLI folks have invested the time to use terminals effectively and view overuse of the P&CI as beneath them, and P&CI folks have no interest in dumping time into learning the CLI to do something they could do on Windows with a P&CI.
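
    A toy illustration of that asymmetry (all names here are invented): exposing a new capability on a CLI is one declarative line, while the P&CI equivalent needs a widget, layout space, a callback, and state plumbing.

        import argparse

        parser = argparse.ArgumentParser(prog="imgtool")
        parser.add_argument("file")
        parser.add_argument("--rotate", type=int, default=0, help="degrees")
        # Adding a brand-new capability to the CLI is a single line:
        parser.add_argument("--grayscale", action="store_true")

        args = parser.parse_args()
        print("would process %s (rotate=%d, grayscale=%s)"
              % (args.file, args.rotate, args.grayscale))
        # The P&CI version of --grayscale would also need a checkbox widget,
        # a spot in the layout, an event handler, and wiring into the pipeline.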



  • For anyone working in the oil industry: as things get worse and more and more people are impacted by climate change, you’re who those affected will come after. Not the CEOs, high up and out of reach - it’s gonna be you guys operating the rigs, inspecting the pipelines, driving the tankers, running the plants, and crunching the numbers. It’s going to be you who gets targeted, because you’re the ones with the logo stamped on your uniforms.