I tried GitHub Copilot this weekend and then came to see if you'd written about it!
That's so true about interviews, I hadn't really thought about it since in a LeetCode interview you are fairly unsupervised.
Copilot "makes the easy things quick" in terms of the "got something wrong, google it, check Stack Overflow, read the answers, try it out, debug it" loop that software ends up in a lot of the time. It short-circuits the simplest ones to where I don't even need to leave the editor, and the examples it produces are generally internally consistent (e.g. it hasn't thrown in jQuery or lodash in places where I'm not using those yet, while Stack Overflow is often "Use this library!"). Then when I get to the harder ones, where I might need to do some deeper thinking or research, I still have the time and energy.
That's what happened to me yesterday. I was refactoring my answers to Cryptopals 2.12 & 2.13, and it let me breeze through unit test writing, some array equality checking, and decoding URI params, so when I hit questions like "Is it a good idea to `extend Array`?" and "wtf should 0x80 look like when you render it as a character?", I wasn't already sick of googling around.
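For concreteness, here's a sketch of the kind of "small but finicky" stuff I mean, in plain Node.js (the helper name `arraysEqual` and the params string are illustrative, not from my actual Cryptopals code):

```javascript
// Array equality: JS has no built-in deep compare, so you end up writing this.
function arraysEqual(a, b) {
  return a.length === b.length && a.every((v, i) => v === b[i]);
}
console.log(arraysEqual([1, 2, 3], [1, 2, 3])); // true

// Decoding URI params: URLSearchParams handles the percent-escaping for you.
const params = new URLSearchParams("email=foo%40bar.com&uid=10");
console.log(params.get("email")); // "foo@bar.com"

// And 0x80 as a character: it's U+0080, a C1 control code, which is why it
// renders as garbage (or nothing at all) instead of a visible glyph.
console.log(String.fromCharCode(0x80)); // "\u0080"
```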
What's interesting is that what I really liked about Stripe's interviews when I interviewed (and CoderPad interviews in general), in contrast with whiteboard interviews, is that you actually had to write and execute real code. Having been on both sides of those interviews and whiteboard interviews, it's been surprisingly informative to see how someone works through a problem when they need the code to execute. Loads of small things will come up: a missing semicolon, a misspelled variable name, String API misunderstandings. In interviewee mode, I was always embarrassed, like "oh jeez, they'll never hire someone who makes a mistake like this", but in interviewer mode it was actually really useful when something small came up, because there's a big split between calmly debugging and fixing it vs. it completely derailing the thought process.
If Copilot becomes ubiquitous, it's true that there will be less information to gain from watching someone do that, but it hopefully will also mean that that part of the job is less critical to evaluate.
As in, pre-Copilot, trip-ups over simple things happen frequently. I may need to remind myself about something small like String APIs or making a POST request. If I'm not good at quickly unblocking myself, then it's reasonable to assume that I'm going to repeatedly trip, and have difficulty making independent progress on a complex problem.
In a ubiquitous-Copilot world, being able to quickly unblock yourself on small things becomes both harder to measure and less important, since Copilot will do a lot of the simple things for you. So it would seem the interview questions will need to evolve toward testing more complex problem solving (and probably also provide Copilot to interviewees to practice with).
That became a real coffee-fueled wall of text 😅 How has your thinking evolved since you wrote this? Have you kept on using it beyond the free trial?
I hadn't used it much since the last time I wrote this, but I've been putting it to use over the last few weeks on a personal project. Your comment about not having to google trivial but finicky stuff resonates with me. It's great at knowing random, obscurely documented things that you need to pass into a library to make it work. All this really helps because then I get more time to think about the bigger picture.
A funny habit I've developed when I write code with Copilot is:
* Define the function, write a docstring/comment saying what it needs to do.
* See what Copilot comes up with, verify it works, and edit if it doesn't.
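The pattern above looks something like this (a hypothetical example; the comment is the "prompt", and the body is the kind of thing Copilot would fill in and I'd then verify against a known answer):

```javascript
// Returns the Hamming (bit) distance between two equal-length byte buffers.
function hammingDistance(a, b) {
  let bits = 0;
  for (let i = 0; i < a.length; i++) {
    let xor = a[i] ^ b[i]; // differing bits for this byte
    while (xor) {
      bits += xor & 1;
      xor >>= 1;
    }
  }
  return bits;
}

// The classic Cryptopals sanity check:
console.log(
  hammingDistance(Buffer.from("this is a test"), Buffer.from("wokka wokka!!!"))
); // 37
```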
In the context of an interview, especially with LeetCode-style small problems, this means that I'm not doing much work at all (especially if Copilot is correct most of the time). But there's still lots of complicated stuff we do on the job that it won't be able to do. So there will be a need to change interviews for sure.
Funnily enough, I know that it was able to solve one of Stripe's programming exercise questions in its entirety (which is why Stripe doesn't allow Copilot during interviews) :D