
©2025 Poal.co


Remember when they said that COBOL would mean that business people could write their own code and not have to hire programmers anymore? yeah....

Archive: https://archive.today/EhhJx

From the post:

>There’s a lot of chatter in the media that software developers will soon lose their jobs to AI. I don’t buy it. It is not the end of programming. It is the end of programming as we know it today. That is not new. The first programmers connected physical circuits to perform each calculation. They were succeeded by programmers writing machine instructions as binary code to be input one bit at a time by flipping switches on the front of a computer. Assembly language programming then put an end to that. It lets a programmer use a human-like language to tell the computer to move data to locations in memory and perform calculations on it. Then, development of even higher-level compiled languages like Fortran, COBOL, and their successors C, C++, and Java meant that most programmers no longer wrote assembly code. Instead, they could express their wishes to the computer using higher level abstractions.


(post is archived)

[–] 4 pts

Throwing a bunch of ingredients that taste good by themselves into a giant pot does not necessarily mean the meal will taste good. It takes a cook/chef to know that tomatoes, though they are fruits, do not belong in fruit salad. The same goes for programming. Sure, AI can cobble together code just as a pajeet with StackOverflow can, but the results won't be good, especially since SO is already full of sub-par pajeet code. As a veteran programmer of many decades, I think I can say AI will not replace me. It doesn't think. It doesn't understand. It doesn't reimagine the work to be done. Most importantly, it doesn't know when it makes mistakes. I do. It will not replace me. Same goes for you, pajeets. I get called in to fix your broken shit for a lot more money than you made writing it in the first place.

[–] 1 pt

based pajeet rectifier

rectumifier?

[–] 1 pt

Normal people will not be able to formulate questions to an "AI" that will give them the information they need. Doing so will require a semi-formalized grammar that the AI can understand without ambiguity (effectively a programming language, though one that could emerge through use and refinement of how questions are asked). Constructing efficient, effective prose in this language will itself be a job done by a programmer.
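To make the point above concrete, here is a toy sketch (entirely hypothetical, not from the post or any real product) of what such a "semi-formalized question grammar" could look like. The grammar, the query syntax, and the `parse_query` helper are all invented for illustration; the takeaway is that the moment questions must follow unambiguous rules like these, they have become a small programming language.

```python
import re

# Toy grammar for unambiguous "questions" to a hypothetical AI:
#   query := "find" SUBJECT ("where" FIELD OP VALUE)? ("sort by" FIELD)?
# Anything outside this grammar is rejected rather than guessed at,
# which is the "no ambiguity" property the comment describes.

def parse_query(text):
    """Parse a query string into a structured request dict.

    Raises ValueError on anything the grammar does not allow.
    """
    m = re.fullmatch(
        r"find (?P<subject>\w+)"
        r"(?: where (?P<field>\w+) (?P<op>[<>=]) (?P<value>\w+))?"
        r"(?: sort by (?P<sort>\w+))?",
        text.strip(),
    )
    if m is None:
        raise ValueError(f"not a valid query: {text!r}")
    # Keep only the clauses that were actually present.
    return {k: v for k, v in m.groupdict().items() if v is not None}
```

For example, `parse_query("find orders where total > 100 sort by date")` yields a structured request, while free-form phrasing of the same wish raises an error; learning to stay inside the accepted grammar is exactly the programmer-like skill the comment predicts.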

Almost all of the users don't even know what they want. This is already true for software: the customer rarely has any real idea of what they actually want. They concentrate on trying to describe something that will fulfill their current process, rather than examining the underlying reasons for why they are doing that process, understanding what they want to accomplish, and then coming up with efficient ways of doing it.

In reality, though, the results of asking an AI about any topic will be only as reliable as the information used to train it. Users will be beholden to the training data set and to the (usually clandestine) manual overrides that enforce whatever censorship the AI's operators desire. This means that AI will feed you, at best, a truth-like substance.

It will probably take a couple of major failures (something like the LTCM collapse) for people and companies to learn that the output of AIs cannot be trusted. And it's not as though the reliability of this output will improve over time: it will become more sophisticated, and the dishonesty of its output harder to detect, but it will not become honest without the removal of all censorship limits (which would cause any AI to immediately become "racist", "sexist" and "anti-semitic").