i can tell you from experience that teachers are using it too lol, to create the homework
one of my university professors admitted that he used copilot to create some questions for our exams of that class 🙃
I tried it for my class, and the questions it comes up with are boring, repetitive, and generic.
I feel very sorry for you that you need to endure that.
Yea, I saw LLMs try to ask me questions to test my knowledge (which I didn’t request), and they were so bad I almost felt second-hand embarrassment. They would ask a rather obvious question and provide a tip that was almost the answer itself.
Would this be better than recycling the same 5 years’ worth of material?
What’s wrong with recycling material? If it’s decent, it’s still gonna be useful for a new class…
When your exams are all the same questions every year, just in different orders, and all your exercises are on obsolete software, yeah, it’s a problem. Even worse is when the prof doesn’t even look at your work and just gives a mark based on his feels that day. I did extremely well in that class, but all I learned was that I’d forgotten something in an autoexec.bat file; I didn’t realize it ran 1 line past the bottom of the screen with a silly broken command
Autoexec.bat… you’re over fifty :p so am I though :(
This was a class in 2012, and yes. I remember putting “win” in my autoexec.bat so I didn’t have to type it lol
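For anyone who never had to touch one: a minimal sketch of what that sort of autoexec.bat looked like. The contents here are a typical example, not the poster’s actual file, and the paths and drivers are assumptions:

```bat
@ECHO OFF
REM Basic environment setup, run by DOS at every boot
PATH C:\DOS;C:\WINDOWS
SET TEMP=C:\TEMP
REM Load the mouse driver into upper memory
LH C:\DOS\MOUSE.COM
REM Launch Windows automatically so you never have to type "win" yourself
WIN
```

Because DOS just executes the file top to bottom, a stray broken line at the end (especially one scrolled past the bottom of the screen) would error out on every boot without anyone noticing.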
As a public school teacher, part of our training on AI is to use it in our classrooms and to talk about using it, like you would in making a citation.
I do use it to construct rote tasks, especially materials for vocabulary practice which are great but aren’t worth my energy to spend much time on. I always tell the kids when I’ve used AI to create sentences, etc. I think it’s great for them to see usage modelled responsibly.
It would be a disadvantage to deny kids usage altogether, and prompting AI should be explicitly taught as a skill set. Cheating is definitely an issue, but more and more teachers are moving away from rampant computer usage in class and thinking actively about how to forestall such cheating.
Then you had a bad professor and the homework was useless. What class was it?
software engineering processes (the name is in French, this is my best attempt at a translation)
but many of my classes this semester had ai slop in them lol, one of them had nonsensical ai generated images in the class notes, in another one the teacher used cursor as his IDE to demonstrate stuff…
my classmates are all in on it too, naturally. for one project, one of my teammates announced that he’d already done most of the work! wow, so cool, and so early too! in hindsight i should’ve seen it coming… basically the whole thing was vibecoded and i only noticed at the end when it was time to do minor adjustments (such as fixing major features that were not working lol)
AI is terrible in university, because it takes students minimal effort to produce, while the hours I spend marking it are wasted. Pointing out the errors produces no value, since the students didn’t go through the process in the first place, and the machine isn’t listening.
I see the same thing in my day job. Analysts produce effortless reams of bullshit that technical experts like myself have to wade through and proofread. We are seen as the barrier, but the more AI is used, the longer the review takes, because there was no quality control on the generation of the material.