> As a teacher, I can tell you that students get really angry if you put a question on an exam that requires a concept not explicitly covered in class. Of course, if you work as an engineer and you’re stuck on a problem and you tell your boss it cannot be solved with the ideas you learned in college… you’re going to look like a fool.
Very flawed comparison. At work I get to go off and do research, run experiments, collaborate with peers and people who might have more expertise in a given subproblem, and generally have much more time. An exam trying to test you on material you haven't studied is supposed to test for what? Your ability to synthesize knowledge out of thin air.
The rest of the article is well written and correct, but this particular aside felt weird.
I took an exam once and there was a question on it that wasn't covered. I vaguely remembered something from somewhere else, was able to piece together the formula, and actually got the correct answer. When the test marks came back, the prof apologized about the question and didn't count it in the grades. Two other people in the class and I, who got the answer, were held after class and accused of cheating. That is the school system. They do not reward creativity or ingenuity.
Nope, and nope. I've had tons of such questions during university in various courses, so what? Are they supposed to babysit you and hand you only the exact stuff they went over and over, so as not to upset some fragile minds? Such stuff is part of educational life. They expected you to synthesize the knowledge out of other, related knowledge that was very directly covered, and to use a bit of if-this-then-that. All trivial if you actually grokked the subject well enough.
I failed some, did well on others, and felt a bit of unfairness, an emotion completely understandable for a young mind that just wanted to do the test and move on. But looking back, they were good questions, testing my actual skills in the topic I was studying.
I agree that it's a flawed comparison, but it does touch on a very real issue in the workplace: the difference between employees I can send away with a problem, who will hit issues and try to solve them, and those who come back to a senior at every problem. The latter habit puts a serious ceiling on the level those employees can work at.
As a leader, I can just tell people literally that: if you can't go away and work independently on a problem, it puts a ceiling on how well you're going to do. Then I ask people what they've tried before asking me.
Just being upfront with people can break low performance patterns of behavior.
Even then I would call it bad teaching, as the usual mode is:
New stuff gets explained in class by the teacher - the details are for self study.
This is how it was communicated to me in school, so I would call it part of the "contract".
But assigning homework on a fundamentally new subject without explicitly saying that this is what it's about will rightfully confuse students, as the normal mode is different.
And when it happened to me, it was always because the teacher messed up and got confused themselves about what they had and had not taught before.
Teaching students to self-study is a valuable skill in itself that should not be taught by accident, but with a clear pedagogic concept.
>> Your ability to synthesize knowledge out of thin air
As someone who graduated high school, I'd hope my more accomplished peers would know the difference between hypothesis, theory and proof. It is entirely possible, and useful, to test someone's ability to form a cogent hypothesis. If you were faced with a question beyond the scope of the ideas you were taught, and could not rely on any assistance, the only useful thing to know about you is how well you would handle it yourself.
If you would synthesize knowledge out of thin air, that would be a failing grade.
The whole premise of "learn some stuff, then take an exam on exactly that stuff" is pretty flawed, and that's the point. So much of the academic structure is about what's convenient for evaluation rather than what's best for learning. Why not get rid of the exam and replace it with something else entirely? Who says we have to have exams at all?
I think it depends on the question. If it's not a question of the form explicitly presented before, but answerable with a minute of thinking using the knowledge the student has already mastered, then it makes sense.
A time limited exam is probably the wrong place for that, though, due to the stress interfering with that kind of thinking. It would be better for a homework assignment.
If ChatGPT didn't exist.
Okay, maybe in class, on paper is the right place for that.
I feel like it's an argument for the benefits of abstract reasoning. I don't think they are saying it'll be like that in the real world, I think they just want to test how you do under adverse conditions.
Stress testing the student's academic prowess, if you will.
I found the whole article a bit heavy on the anti-academia sentiment. And I went to industry after undergrad.
It's a false dichotomy between the "thinkism" bogeyman (actually reading books and papers and putting work into theoretical design is just bad now? Have they tried building anything in the physical world? Checked in with nuclear physics, ever?) and hands-on experience. Both are important. It should be about balance, not trashing an incredibly valuable set of tools because others exist...
I am team academia more than hands-on experience. And I have 5 years of experience. To me, it felt like most SWE things could eventually be solved by what I learned at school.
Not everything I did I learned at school, such as navigating codebases with more than a million lines of code. But most things? Yea.
With that said, I am curious about people who say they learned much more through experience: what specifically did you learn?
Could you tell us what you learned at school that is useful for your SWE career? I'm sure there are a lot of us who learned nothing at school and learned everything ourselves as kids on the internet.
The problem with "learned everything ourselves" is that you might have niche interests and you miss things. Things I learned that I probably wouldn't have learned on my own: computer architecture (memory, buses, CPUs, instruction sets) and the related VHDL/Verilog; how complex synchronization is (implementing synchronization libraries from scratch); different programming paradigms (functional languages); compilers and operating systems (kernel modules, etc.); various types of maths (DSP); algorithm complexity analysis.
Some I ended up using more during my career than others, but knowing more definitely reduced my tendency to think "ah, that should be easy".
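To give a feel for the "synchronization from scratch" part: a classic exercise is Peterson's mutual-exclusion algorithm for two threads, built from plain reads and writes. Here's a minimal sketch in Python (my own illustration, not from the course; it assumes sequentially consistent memory, which CPython's GIL happens to provide, while real hardware would need memory barriers):

```python
import threading

# Peterson's algorithm: mutual exclusion for exactly two threads,
# using only plain reads and writes, no lock primitives.
# Correct only under sequentially consistent memory; CPython's GIL
# gives that here, but real hardware would need memory barriers.
flag = [False, False]  # flag[i]: thread i wants the critical section
turn = 0               # which thread yields when both want in
counter = 0            # shared state the "lock" protects

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(50_000):
        flag[me] = True        # announce intent
        turn = other           # politely let the other side go first
        while flag[other] and turn == other:
            pass               # spin until it is safe to enter
        counter += 1           # critical section
        flag[me] = False       # leave

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100000 every time: no lost updates
```

Getting even this toy version right (and understanding why it breaks without the `turn` variable) is exactly the kind of thing that's hard to stumble into when self-teaching.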
Edit: feel free to summarize this with an LLM; it will be a ginormous comment.
That's a fair question and I'll do my best to answer it. It'll come in an edit. I think it's fair to say that not all courses were created equal in this regard, so I'll do it course by course. I studied a bachelor's in information science, but I tweaked my program so close to computer science that I almost daresay it was computer science (if 3 courses had been different, it would have been). I also studied a bachelor's in psychology, a two-year master's in computer science, and a one-year master's called game studies (officially a specialization of information studies, but in practice it really was game studies as a whole field that we studied).
I'll try to do it in order per study program too.
AI-kaleidoscope: general overview of AI algorithms. It's a shame we didn't yet know how to program, or appreciate the usefulness of BFS or DFS, when we learned them here. Not a useful course due to scaffolding issues (teach programming first).
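For what it's worth, once you can program, the BFS such a survey course covers is only a few lines. A minimal sketch (the graph and node names are just a made-up example) of using it to get the shortest hop count in an unweighted graph:

```python
from collections import deque

# Minimal BFS sketch: shortest hop count in an unweighted graph.
def bfs_distance(graph, start, goal):
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return -1  # goal unreachable from start

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert bfs_distance(g, "a", "d") == 2
```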
Business mathematics: partial derivatives, etc.
Problem solving: useless course (teach programming first).
Privacy and security: we didn't learn much about security. We learned a thing or two about privacy. Should've been a TED talk, not a course.
Graph theory: graph visualization (and when not to do it, which helped me out as a data analyst later), mathematical proofs, social networks (helped me to actually network a bit), graph algorithms (made leetcode easier), and not being scared of math notation. This was a really math-heavy course, as it was taught by someone who studied math in undergrad and grad school and then switched to CS for his PhD (and by the time he taught it he was a full professor). He did not skimp on the math, which was wild, since I was under the impression that my "business informatics" bachelor didn't require advanced math as a high school prerequisite. But I definitely needed it here, so this course was hard.
Web technology: this was a bit too early, but it was a good overview of web programming at the time. I remember having the DOM explained to me and thinking "wtf is the point?" That was because they explained it way too theoretically; they simply should've opened Firebug or something to really show it.
Language of Logic and Methods of Reasoning: propositional logic and predicate logic. Practically speaking: after this course, if statements are not a problem. This was true for me at least.
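As a concrete example of why the logic helps with if statements: De Morgan's laws let you mechanically rewrite a negated compound condition into a readable one. A toy sketch (the function and its parameters are my own invented example):

```python
# De Morgan's law from propositional logic:
#   not (A or B)  ==  (not A) and (not B)
def should_retry(status, attempts):
    # Convoluted original: not (status == 200 or attempts >= 3)
    # De Morgan gives an equivalent, easier-to-read condition:
    return status != 200 and attempts < 3

assert should_retry(500, 1)
assert not should_retry(200, 1)
assert not should_retry(500, 3)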
Pervasive computing: fun course but could've been a TED talk about how tech is used in interesting ways.
Introduction to programming: basic programming stuff in Java. I learned that Java is a terrible language to start programming in. I recommend JavaScript for app/web-oriented people and Python for "just pick a language" people. Nevertheless, while it was a terrible start, it did teach what it needed to teach which was a basic understanding, and skill, in programming.
Empirical methods: the better name is statistics 1. We learned about statistics and we had to program in R. Since our actual programming ability was quite weak it was a double course. It was the second language ever that I had to take seriously and it taught me a lot of programming stuff and statistics stuff at the same time. It helped that I had friends studying psychology at the time as that degree has a lot of stats in it, so I knew the lingo. Almost everyone else was hopelessly lost.
Programming project: no lectures, just one big programming assignment. After this it was expected that the student could write readable code. I failed this course the first time by a hair. And when I came back to it the second year, I realized that I failed because my code was unreadable. So I refactored the whole thing and learned how to write much more readable code. We created the game engine for an Othello game. The graphics library was provided by the TAs. So we also learned to program with a library that was way beyond our heads at the time. And that also implicitly teaches you to trust certain abstractions.
Interactive multimedia project: we learned XIMPEL, a hypermedia framework, which was kind of useless. But through XIMPEL we also learned about storyboarding, creating scenes, and general video editing. This teacher was super hands-off and allowed students to be creative. So I also learned to create simple PHP websites and learned my way around bash a bit. Then I decided to create an upload script where I used PHP to call the ftp command in bash. I thought it was impossible, but it wasn't, and I had to rethink what web applications could really do. None of these things were formal course requirements, but this teacher encouraged this type of exploration, so I do credit him that "I learned it in school". It's partly for these efforts that he gave me an amazingly high grade: my XIMPEL story graph was a bit meh, but my creativity and extra explorations were a 10 out of 10 effort. So he gave me a 9 out of 10 in total. It also deepened my HTML/CSS knowledge.
-----
I'll write the rest later in another comment; this is about the first year, and I studied 9 of them. As you can see, I was just learning the basics here. But there are a few patterns:
* Some teachers allowed us to work on real stuff if the student chose to (e.g. Interactive Multimedia), and that got way more serious later on, as I created an iPhone app for a client when the teacher taught Multimedia Authoring in the master's.
* Some courses were quite useless or could've been condensed to a TED talk.
* Some courses taught something useful or semi-useful, but not up to industry standard. But as you'll see, this built up to a level where, while not quite industry standard, the gap between industry standard and whatever I did was small enough to make the jump easily by simply applying what I thought was common sense.
Interesting! What are the names of the schools you attended, if that's not too private?
In France, at "prépa" (2 years of intensive courses preparing for the entrance exams of the big engineering universities), I learned the theory behind computer science (for example, how to model a regex machine with graphs / automata / matrices). That was useful theory to me, but that's just a drop in the ocean of uselessness.
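To illustrate what modeling a regex machine with graphs/automata means: a regex compiles to a finite automaton, i.e. a graph whose edges are labeled with input symbols (equivalently, one adjacency matrix per symbol). A minimal sketch with a hand-built DFA for the regex a(b|c)*; the state names and the regex are my own example:

```python
# A regex like a(b|c)* modeled as a DFA: a graph whose edges are
# labeled with input symbols, stored here as a transition table.
# (The same table can also be written as one adjacency matrix per
# symbol, which is the matrix view of the automaton.)
START, LOOP, DEAD = 0, 1, 2  # DFA states

TRANSITIONS = {
    (START, "a"): LOOP,  # the leading 'a' is mandatory
    (LOOP, "b"): LOOP,   # (b|c)* loops on itself
    (LOOP, "c"): LOOP,
}
ACCEPTING = {LOOP}

def matches(s):
    state = START
    for ch in s:
        state = TRANSITIONS.get((state, ch), DEAD)
        if state == DEAD:
            return False
    return state in ACCEPTING

assert matches("abcbc")
assert matches("a")
assert not matches("ba")
```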
I find that many people can learn a lot by doing but then at some point hit a wall and really struggle to recognize that another kind of learning needs to take place to understand a deeper concept.
I think it’s largely there to set up the point that comes after, which is that it would be absurd in a professional setting to pronounce a problem unsolvable because the entirety of your university education doesn’t provide enough information to solve the problem.
From my experience, the boss is usually a complete moron, so who cares. It also creates this unhealthy assumption that the engineer is subservient to the boss.
From my experience, the boss does not know things you know, but that does not make them a complete moron, because they probably know things you don't.
Here's an example. While I was consulting at a large Danish company, every Friday morning all departments in this big building would share breakfast and the bosses would say some things.
So this one morning they explained that in the coming months people should register time in a particular way because of accounting and how it related to a particular government grant and money that needed to be used up by a particular time in order to get to the next step of blah blah blah.
I realized, as my eyes glazed over: damn, this is just the same reaction people who don't understand browser rendering engines have when I start telling them about different events.
I also noticed other clueless people gamely trying to question these finance nerds on how things worked, and the patient finance nerds explaining some detailed bit and the clueless person clearly out of their depth with that "uuuuhhhh, hope they don't ask me if I understand" look on their face.
Now, if it hadn't been for them explaining this stuff I would have gone around thinking the boss is a complete moron. I once saw him mistake a nail gun for a drill! He doesn't understand how search engines work and why stemming and decompounding might be important, I know because I tried to explain to the idiot one time!! But since he actually talked about his work for a bit I realized he just happens to know stuff I don't.
I'm betting most of the morons you know are maybe not quite so stupid, although probably not as forthcoming to those who work under them about why things need to be done in a certain way.
The author is talking about two orthogonal problems.
1. "Thinkism": As described, over-engineering before writing code for a complex system and seeing where it takes you. Maybe decision by committee, or just overthinking. But its like one form of replacing on-the-ground adaptable, creative thinking, with a dumber process.
2. A completely separate issue: it's saying that students are mad if they're forced to think for themselves. This is a complaint about underthinking and the tendency of inexperienced coders not to come up with a grand plan before writing a line of code.
So which one is the problem? I'd say the problem is not knowing when to over or under-think something.
As a kid I noticed that repairing things is the perfect way to combine experiential learning and "thinkism" - you have to develop a mental model of how something should work, what's broken, and how to fix it. Then you combine that with the physical sensations of how tight the nut is, or how hard you need to turn that wrench - which in turn feeds into the mental model and determination of next steps.
Working experiments constrain our thinking from spiralling out. This is the true hindsight bias, and a good one at that. When we have a working prototype, we can try to break it in many ways and find out a lot about the underlying theory. Theoretical physicists do this all the time with toy models.
I did not know of the "thinkism" expression. When I was studying at a French engineering school, I called that "le mythe du cerveau" (literally "the brain myth", though it does not roll off the tongue as well).
It is a guaranteed failure mode of large orgs. Curious to hear about more references on how to fight this at an organizational level, besides the one given in the OT.
Thank you for this term. In my view, the belief that AGI singularly will rapidly destroy us because it will think 10,000 times faster than us is a form of thinkism.
> As a teacher, I can tell you that students get really angry if you put a question on an exam that requires a concept not explicitly covered in class.
Well then I think you omitted a rather important topic in your teaching: that the purpose of teaching is to provide a toolkit with which the student can extend their abilities.
I liked the article and the term thinkism (which I hadn't heard before). I think education should be radically changed to be about doism instead. I think it's likely we'd have more engaged kids learning more valuable life skills.
Reminds me of https://xkcd.com/927/ - and to avoid confusion: everybody has good intentions and thinks they know better.
We definitely should try to improve and experiment with any system, including education, but I really doubt it is that easy to improve education, and it will depend on objectives, culture, and the political environment more than on doing A or B.
Like everything, there's always a balance. Sometimes building something and seeing how it works might have a higher cost to "correct" once built. Other times, it's much faster to build.
Someone, somewhere, at some point, will think I'm a clueless idiot.
We're all clueless idiots at the end of the day.
1. "Thinkism": As described, over-engineering before writing code for a complex system and seeing where it takes you. Maybe decision by committee, or just overthinking. But its like one form of replacing on-the-ground adaptable, creative thinking, with a dumber process.
2. Which should be completely separate, it's saying that students are mad if they're forced to think for themselves. This is a complaint about underthinking and the tendency of inexperienced coders not to come up with a grand plan before writing a line of code.
So which one is the problem? I'd say the problem is not knowing when to over or under-think something.
It is guaranteed failure mode of large orgs. Curious to hear about more references on how to fight this at an organization level, besides the one given in the OT.
Not everything needs to be made so easy to refer to; sometimes using three or four words instead of one is fine.
Also buildin' stuff! (Which is the best type of doin'.)
Because in real life it's also more about "doing": you're physically fixing a clock, or writing code, or designing a building.
Doism shouldn't be 100%, but it certainly should be more.
Portal (a puzzle game by Valve) had levels built in such a way that each one introduced the player to a new mechanic, and only then built on top of it.
> Thinkism sets aside practice and experience
Thinking succeeds experience and precedes practice; it's not apart from them.