A Fun History of AI (It’s Older Than You Think)
12-12-2025 05:42 PM - edited 12-12-2025 06:04 PM
Since I’ve been doing so much reading about how AI handles uploaded images and digital copyright, I ended up wandering into the history of AI too, and it turns out AI is much older than most people realize. I’m not an authority on any of this; I just read a lot, usually late at night while listening to my husband’s breathing and keeping my phone close in case I need to call for help. Research has become a kind of therapy for me, something steady I can focus on during those quiet hours. So here’s the simple, interesting version of what I learned.
Most people think AI started in the last few years, but the idea actually goes all the way back to the 1950s, when early researchers began asking, “Can a machine think?” and built tiny programs that could play checkers or solve simple logic puzzles. Nothing dramatic, more like digital experiments. Then came the 1980s, which was AI’s first big moment. But AI back then looked nothing like today’s tools. There was no internet, no image scraping, no giant datasets. AI learned from rules, not images. Humans literally typed in thousands of “if this, then that” instructions. These were called expert systems, and they were basically giant flowcharts pretending to be smart. Early neural networks existed too, but they were tiny, more like baby neurons than anything we use today.
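For the curious, those “if this, then that” expert systems really were just hand-typed rules checked one by one, with no learning involved. Here’s a toy sketch of the idea (the animal rules and fact names are invented for illustration, not taken from any real system):

```python
# Toy sketch of a 1980s-style expert system: no training, just
# hand-written if/then rules checked in order until one fires.
# The rules and facts below are made up purely for illustration.

RULES = [
    ({"has_fur", "says_meow"}, "cat"),
    ({"has_fur", "says_woof"}, "dog"),
    ({"has_feathers"}, "bird"),
]

def classify(facts):
    """Return the first conclusion whose conditions all appear in the facts."""
    for conditions, conclusion in RULES:
        if conditions <= facts:  # every condition is among the observed facts
            return conclusion
    return "unknown"

print(classify({"has_fur", "says_meow"}))  # -> cat
print(classify({"has_scales"}))            # -> unknown
```

Scale that up to thousands of rules written by human experts and you have the “giant flowcharts pretending to be smart” described above, which is also why those systems broke down the moment they hit a situation nobody had written a rule for.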
The AI we use now, the kind that can understand language, generate images, and help with creative work, didn’t really take off until the 2010s, when computers finally got fast enough and we had enough digital data to train them. That’s when machine learning and deep learning became practical at scale. And even then, the training wasn’t about copying or storing anyone’s work. It was about learning patterns, the same way a camera learns light or a child learns shapes.
And since creators often ask me what AI “trains on,” here’s the part that helped me understand it better: modern AI models are trained in large batches before they’re ever released to the public. These batches come from a mix of publicly available information, licensed datasets, company‑owned material, and optional user‑contributed data (only if you turn that setting on). Public datasets are simply collections of information that are already out in the open, the same kinds of things search engines have been indexing for decades. They help AI learn broad patterns, not store or copy anyone’s work. Your personal uploads aren’t added to these datasets, and the training doesn’t happen every time you use the tool.
AI models don’t keep images or maintain a folder of creator uploads. They don’t store your mockups or remember your designs. They learn mathematical relationships, not artwork. That’s why companies like Microsoft, Google, and Canva publish documentation explaining that your uploads are used to process your request and then handled according to your privacy settings. Training happens long before the tool reaches your screen, and your images aren’t part of it unless you explicitly opt in.
I like knowing this because it reminds me that AI didn’t suddenly appear out of nowhere, and it isn’t quietly absorbing our personal files. It’s been slowly evolving for decades, built on published research, transparent datasets, and clear privacy controls.
I’m not trying to convince anyone to use AI if it doesn’t feel right for them. I just like understanding how things work, and I’m sharing what I’ve learned in case it helps someone.
If you ever want to read some of the material I started with, Microsoft has a clear overview here: The History of AI. It walks through the early years, the breakthroughs, and how we got to today’s generative AI. https://www.microsoft.com/en-us/microsoft-365-life-hacks/privacy-and-safety/the-history-of-ai
****
Oh, I forgot to add this part, and it always surprises people: AI isn’t new at all. The name “artificial intelligence” was first used back in 1956, but the ideas behind it were already floating around much earlier. Researchers were sketching out early neural networks in the 1940s, and by 1966 there was already a little chatbot called ELIZA carrying on simple conversations. In 1997, IBM’s Deep Blue beat chess champion Garry Kasparov, and machine learning really started gaining momentum in the 2010s. By the time today’s generative AI showed up, the groundwork had been building for about seventy years. A fun history of AI, right?
12-13-2025 04:14 AM
I don't believe in AI. For me, it is computer programming, much more sophisticated than in the past, but the computer still has to have a program, software or hardware, to make it run. It learns what it is taught. What goes in is what comes out. The real trick for the end user is to get the program to give you what you want. So for me, AI art is an art form of its own. You have to be very specific with your descriptions to get the results you want. Some programs are better than others. Just my two cents on the subject.
12-13-2025 01:23 PM
I completely agree, getting a prompt to behave is absolutely an art form. I’ve done enough edits and retries to earn an honorary degree in “Why won’t you generate what I asked for?” My post was coming from a slightly different angle, though. I’ve been doing one of my late‑night research deep dives, and I ended up reading about the history of AI. What surprised me is that it isn’t new at all; it’s been about 70 years in the making. The link I shared has a great timeline, from early neural networks in the 1940s to ELIZA in the ’60s to Deep Blue in the ’90s. So I was mostly sharing the history and how training actually works today, since a lot of creators think AI suddenly appeared out of nowhere. I just found it interesting, even if my research gets a little long sometimes.
12-13-2025 02:01 PM
Because I'm old, I got to play a bit with ELIZA's DOCTOR program (it simulated a psychotherapist) in the early '80s. I had friends with academic access. It did not pass the Turing test (the ability to simulate a human well enough to fool a human), at least for me. But that was more than 40 years ago. ChatGPT passes the Turing test now, and how.
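Part of why ELIZA couldn't fool anyone for long is that it was just keyword pattern matching: spot a phrase, reflect it back as a question. A minimal sketch of the trick (these two patterns are invented for illustration, not ELIZA's actual script):

```python
import re

# ELIZA-style response: match a keyword pattern, echo the captured
# phrase back inside a canned template. The patterns below are
# simplified examples, not the real DOCTOR script.
PATTERNS = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text):
    """Return a reflected question for the first matching pattern."""
    for pattern, template in PATTERNS:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # fallback when nothing matches

print(respond("I feel tired today"))  # -> Why do you feel tired today?
print(respond("hello"))               # -> Please go on.
```

A few dozen patterns like these were enough to feel eerily conversational in short bursts, and you can see why it fell apart the moment the conversation went anywhere the script didn't anticipate.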
It feels new because now there is general, non-academic access. If you go back in forum threads just a couple of years, you see us talking about it and whether it was good enough for template “photos.” At the time it hilariously was not, and there are some comical images in those old threads of attempts to make template photos and how grim they looked.
12-13-2025 07:21 PM
It’s funny how fast things shifted. One minute we’re laughing at AI trying to make a believable hand, and the next minute everyone suddenly has access to tools that actually work. But even with all the progress, the same rule still applies: the tech can only take you so far. The creator still has to bring the clarity, the sense of proportion, and the merchandising brain. So yes, it feels new, but also not new at all. Just a much shinier version of the same old “let’s see what this thing can do” energy we’ve had for decades.
12-14-2025 05:27 AM
Fascinating post - thank you!