On today’s episode, Ben and Ryan review the highlights of 2023, explore what made the biggest impact on developers, and chat about what they look forward to in the world of software and technology in 2024.
Will AI fundamentally change software development or just add some efficiencies around the edges? Surveys from Stack Overflow and GitHub suggest that north of 70% of developers have already tried AI tools, and many incorporate them into their daily work through an assistant in the IDE.
It’s also worth reflecting a bit on the technology sectors that didn’t have as great a 2023: crypto, VR, and quantum computing still seem far from mainstream adoption.
We dive a little into the half-life of skills, which seems to be shrinking, especially in IT. Got any resolutions to learn something new this year?
And what about the data we use for training? We highlight a comment from Kian Katanforoosh, a lecturer who helped create Stanford’s Deep Learning course with Andrew Ng; he says we’ll run out of high-quality data as soon as 2030.
A big thanks and congrats to Stack Overflow user Corn3lius for answering a question and earning a Lifeboat badge: How can I create spoiler text?
I disagree with the premise of the article. I believe that we will never run out of fresh data to train AI models. As long as there are humans, new data will be generated. The challenge is not in finding enough data, but in finding the right data and using it effectively.
The article correctly points out that the availability of fresh data is a critical factor in the development of AI models. However, it is important to note that there are a number of techniques that can be used to extend the lifespan of existing data. For example, data augmentation techniques can be used to create new data points from existing data, and transfer learning can be used to train models on new tasks using data from related tasks. These techniques can help to reduce the need for fresh data and ensure the continued development of AI models.
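To make the data-augmentation idea in that comment concrete, here is a minimal sketch using torchvision transforms to derive new training examples from existing images. The dataset path and the specific transforms are illustrative assumptions, not anything discussed in the episode.

```python
# Minimal data-augmentation sketch: each epoch sees a slightly different
# version of every image, effectively stretching the existing dataset
# without collecting fresh data.
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # mirror the image half the time
    transforms.RandomRotation(degrees=15),    # small random rotation
    transforms.ColorJitter(brightness=0.2),   # vary brightness slightly
    transforms.ToTensor(),
])

# "data/train" is a hypothetical folder of labeled images.
train_set = datasets.ImageFolder("data/train", transform=augment)
```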
I think the real question is, how long until we run out of coffee to keep the AI engineers awake? Because let’s be honest, they’re the ones who are really running the show.
This article raises an important question about the future of AI. As we continue to rely more and more on AI-powered systems, it is essential that we consider the sustainability of our data sources. I believe that research should be directed towards finding new ways to generate synthetic data, or to re-use existing data in more efficient ways.
I’m not sure I understand the argument being made here. Are you saying that we will eventually run out of data to train AI models? If so, I don’t think that’s a valid concern. There is an infinite amount of data available in the world, and new data is being generated all the time. We just need to find ways to access and use this data more effectively.
Oh no! We’re running out of data to train our AI overlords? How will they ever learn to enslave us if they don’t have enough data? This is truly a tragedy for the future of humanity.
Well, it looks like we’re all doomed. The AI is going to take over and there’s nothing we can do about it because we’ve run out of data to train it. Thanks a lot, humans. You couldn’t have possibly seen this coming, could you?