Programming is hard. Or rather, programming well is hard.
It's rather like painting: Anyone can pick up a brush and do a quick doodle, but Rembrandts are few and far between.
It's actually worse than painting: A painting just has to be pleasing to the eye to be passable (it requires more to be great, of course). A program has to work. And a program of even moderate complexity can be a machine with half a million interoperating components, every one of which exhibits non-linear response.
FORTRAN was supposed to allow scientists and others to write programs without any support from a programmer. COBOL's English syntax was intended to be so simple that managers could bypass developers entirely. Waterfall-based development was invented to standardize and make routine the development of new software. Object-oriented programming was supposed to be so simple that eventually all computer users could do their own software engineering.
None of that happened, because programming is a fairly specific skill.
What did happen is that programmers could use these new tools to accomplish more complicated tasks more quickly.
We've introduced more and more complexity to computers in the hopes of making them so simple that they don't need to be programmed at all. Unsurprisingly, throwing complexity at complexity has only made it worse, and we're no closer to letting managers cut out the software engineers.
ChatGPT - or its open-source successors, like ArbitraryCamelid-7B7 - could make a difference in certain areas such as feature tests and pen-testing. But LLMs won't and can't by their nature replace programmers, because they don't understand what they are doing in the first place.
The LLMs, I mean. Often the programmers too, though for the programmers, at least, it's not always the case.
We'd require a different, older, and harder form of AI to do that, and right now nobody is even looking in that direction.
Which is to say, most people have neither the interest nor the aptitude - and the two very often go together. If you do have the interest, you can probably learn.
You can annotate laws with code that provides a mathematically precise definition of the requirements and outcomes of the text.
The problem with this is that (a) legislators can't code, (b) lawyers can't code, (c) judges can't code, and (d) laws tend to be deliberately vague. What good is a law if you can't abuse it to your own benefit?
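To make the annotation idea concrete, here's a minimal sketch in Python. The ordinance and its two-hour figure are entirely invented for illustration - the point is only that code forces the vague bits ("how long is too long?", "does time outside business hours count?") to be pinned down:

```python
from datetime import datetime, timedelta

# Hypothetical ordinance: "No vehicle may park on Main Street for more
# than two hours between 8:00 and 18:00." All figures are made up.
BUSINESS_START = 8          # hour, inclusive
BUSINESS_END = 18           # hour, exclusive
MAX_STAY = timedelta(hours=2)

def parking_violation(arrived: datetime, departed: datetime) -> bool:
    """True if a same-day stay violates the (made-up) two-hour rule.

    Only the portion of the stay that overlaps business hours counts -
    a precision the English text of the ordinance never commits to.
    """
    day = arrived.replace(hour=0, minute=0, second=0, microsecond=0)
    window_start = day.replace(hour=BUSINESS_START)
    window_end = day.replace(hour=BUSINESS_END)
    # Clamp the stay to the enforcement window.
    overlap_start = max(arrived, window_start)
    overlap_end = min(departed, window_end)
    counted = max(overlap_end - overlap_start, timedelta(0))
    return counted > MAX_STAY
```

Notice that writing even this toy rule forced three decisions (inclusive vs. exclusive bounds, overnight stays out of scope, overlap-only counting) that the prose version leaves to a judge - which is exactly where the lawyers would object.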
I mostly program in Python these days. I'm about to embark on a project that will require the use of C++, which is rather like swapping an electric chainsaw for a lump of obsidian.
The examples shown here - old and new alike - are hopelessly antiquated nonsense. Not the fault of the author but of the language itself.