https://leetcode.com/problems/determine-if-two-strings-are-close
https://gist.github.com/autotune/e6345fab54f6acab0ba95c45ec15f22a#file-openai_test_prompt-txt
It returned the following result that matched 3/3 test cases:
https://gist.github.com/autotune/e6345fab54f6acab0ba95c45ec15f22a#file-openai_test_result-txt
I didn't even have to do any manual formatting; it just generated the solution right where it needed to. Now that LC has been automated, can we move on to trying something else in the standard software dev interview process?
(EDIT: I tried actually submitting it, and it doesn't pass Leetcode's tests, so I'm not sure what OP was talking about.)
For example, the correct answer for closeStrings("aabbcc", "abbbcc") is false, but this solution incorrectly returns true.
In addition, it has O(n^3) time complexity and may take a very long time to run. Try closeStrings("a"*1000+"b", "a"*1000+"c"), and then consider that the problem statement allows each string to be up to 100,000 characters in length.
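For reference, the accepted approach is nearly linear: two strings are close exactly when they share the same set of distinct characters and the same multiset of character frequencies. A minimal sketch (the function name and comments are mine, not OP's):

    from collections import Counter

    def close_strings(word1: str, word2: str) -> bool:
        c1, c2 = Counter(word1), Counter(word2)
        # Swapping (operation 1) never changes which characters occur, and
        # frequency reassignment (operation 2) only permutes which character
        # carries which count, so the distinct-character sets must match...
        if set(c1) != set(c2):
            return False
        # ...and so must the multisets of frequencies.
        return sorted(c1.values()) == sorted(c2.values())

This correctly returns false for closeStrings("aabbcc", "abbbcc") (frequencies [2,2,2] vs. [1,2,3]) and handles 100,000-character inputs in milliseconds.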
- Do go into them expecting to see a certain solution.
- Do not penalize people for giving me a different one, if it works. I think that's awesome, personally.
- Do ask people if they can think of additional test cases vs. those provided, and I penalize if there are obvious ones that they don't come up with, or if they just give me a bunch of variations of what we've already provided.
- Do consider whether people actually read and/or understood the prompt (I read it to them). If I've asked you to write a function that returns a boolean, and you write a function that prints 'true' and doesn't return anything, and after some gentle prodding you don't get it, then you didn't solve the problem.
- Do ask you to talk through your solution and approach so I understand how you're thinking about the problem.
I need to know whether you can code, and more importantly that you understand why you're doing what you're doing. I can't ask your prior employer. Not everyone has open source contributions or a portfolio of work they can show me, and no, I am not going to just look at your resume and say "well, they worked for a software company before so clearly they know what they're doing."
I prefer take-home exercises, where there's less manufactured pressure. I think they also lend themselves to a wider variety of challenges vs. the online one-hour test using coderpad or something, meaning you may be able to ask something more appropriate for the role. But this approach upsets people as well.
I haven't gone through the interview process since 2013, so I might be a bit behind here, but are these challenges still happening under pressure in a highly time-constrained environment?
Because if they are, and if you interview me, you will probably not learn whether or not I can code, or if I know why I'm doing what I'm doing.
I know this is a subject that has been discussed to death, but all this tells you is whether the candidate is good at high-pressure coding exercises.
I'm on sabbatical and half considering semi-retirement, but economic pressures are pushing me in the direction of seeking work. If there's one thing I do know, it's that I will pass on companies who insist on this kind of exercise.
Let me take it home, and that's a different story. I know that not everyone will be happy with every approach, but time and time again, I've found the high-pressure exercises useless, and that includes when I'm interviewing candidates.
One dev team at my former company passed on me because I didn't do well in one of these high-pressure exercises. A different team picked me up (they didn't believe in this type of exercise), and I spent over 8 years at the company and was known as a top performer.
I don't think there's a universal definition of 'high pressure,' so I am not sure what else to say about this. It's certainly not a fishbowl experience where you're surrounded by 20 people yelling and pleading for you to save their lives by writing a function to reverse a string in place. And while I've read that this happens elsewhere, we're not critiquing you on the spot and arguing with you about why using this technique causes the interpreter to do some less efficient thing and how could you be so stupid.
We try to coach and offer suggestions though, to help a candidate get to the right answer. That doesn't mean they necessarily passed - if we have to basically give you the answer, then you didn't solve the problem, and the coding interview is one of several inputs into a hiring decision - but I don't like to let candidates leave feeling like they failed if I can help it. I like to find at least one thing to praise about the solutions they turn in and tell them why I liked what they did. It is not the point of these interviews to ruin someone's day or make them feel worthless.
All that said, we can only do so much. Ultimately whether you feel an environment is high pressure is on you, and how you handle it is also on you. We would regret missing out on a good candidate but I'm sure it happens. There are other candidates, and other opportunities for those we've missed out on.
But interviewing, i.e. determining if you want 5k hours of work from someone in less than 5 hours, is going to be heuristic and imperfect. Always.
Obviously that is subjective, but this is on everyone involved, especially the company creating said environment. Twitter 2.0 is an obvious (if extreme) example.
The bottom line is that the whole approach is manufactured, and not indicative of real world problem solving skills or what the candidate will be dealing with on the team.
> We would regret missing out on a good candidate but I'm sure it happens.
It absolutely happens, and I’d bet good money that it happens often. Whether they realize it or not, most teams that take this approach end up creating an environment that is skewed towards a particular kind of personality, and that is ultimately a net negative in the long run.
Back to my personal anecdote (which I’ve observed over and over throughout my career), the teams that rely too heavily on this approach end up rather dysfunctional and unbalanced over time. The team that turned me down was disbanded after some horrific security issues came to the surface. They were so focused on whether candidates could solve certain algorithms that they apparently didn’t verify that these devs understood the fundamentals of client-side vs. server-side validation and why they’re important (and not the same thing). But those devs could bang out a chess algorithm on a whiteboard with the best of them.
I’m not saying that’s how you’re running things, but in my experience, teams that overemphasize this are doing so at the expense of far more important indicators. It takes a lot of discipline and care to not let this become a crutch.
I’m also not saying it isn’t important to verify that a candidate has the necessary skills, but having a conversation about a person’s learning style, how they got into development, how they go about taking on new tasks, etc. can go a long way. I care more about mindset and approach to information acquisition than I do about what a candidate can whip up in an hour.
Sitting down together and going over some existing code (after giving them enough time to absorb it) while having a discussion about the nuances of that code is far more collaborative, and in my experience far more likely to indicate whether or not the candidate knows what’s going on.
I’ve spoken to many like-minded and high-performing colleagues who will not consider companies that are overly focused on this.
Another colleague was recently offered a position after going through such a process, but bailed when they failed to follow up with enough meaningful conversation. Her conclusion: if that’s all they care about, I don’t trust that they’ve built a good team.
There’s a growing contingent of the types of people who you’d want the most, but will not consider your team for doing this.
Good, because we're not.
The main takeaway here, which I think we agree on, is that a good hiring decision requires more than one input. We happen to use a coding challenge as one input in our decisions, but I agree that a collaborative discussion could work too. In particular, we do ask about background and learning style, among other things, in subsequent interviews.
Overall, if I had to attach a percentage to it, I'd say the coding challenge is no more than 25% of the overall decision, with the provision that if you clearly just don't know what you're doing at all - which has happened just a handful of times that I can remember - we're not going to invest more time in you as a candidate.
> There’s a growing contingent of the types of people who you’d want the most, but will not consider your team for doing this.
All I can say to this is that you're probably right, but also that almost no one is as uniquely qualified as they think they are. There are millions of candidates, and especially now that more software companies are OK with remote work, we can consider more of them than before. This is really good and really bad, the latter because employers are even more likely to :shrug: at this sort of thing and move to an equivalently-capable candidate who doesn't complain about having a one hour coding test.
> In particular, we do ask about background and learning style, among other things, in subsequent interviews.
In the past, I’ve done the opposite, and started with these questions up front. In my mind, they’re the most important thing to learn about a candidate, but I realize not everyone goes about this the same way. It does require more time investment up front though, and I get that.
> I'd say the coding challenge is no more than 25% of the overall decision, with the provision that if you clearly just don't know what you're doing at all - which has happened just a handful of times that I can remember - we're not going to invest more time…
If this is the case, wouldn’t that be evidence that this process should be de-emphasized even further, or at least moved to a later stage?
Correct me if I’m misreading this, but if almost everyone coming through has had some level of success with the coding stage, it’s not really screening many candidates out, but still could be changing the candidate’s view of your team and company. If it’s the first thing they encounter, it’s a big part of the first impression the candidate forms.
In a worst case scenario, you may drive someone good away with minimal benefit to the hiring process, and in a best case scenario, you’ve only saved a few conversations given the low failure rate.
> but also that almost no one is as uniquely qualified as they think they are.
I completely agree with you here, I just don’t believe an artificial coding test is the antidote to that. The more experience and confidence someone has (warranted or not), the less they feel they need to put themselves through pain for their next role.
Really not trying to be difficult here, but trying to understand the mindset and strategy behind this style of hiring, and I’m not quite seeing where the value is coming from based on what you’ve shared.
I will say that what you describe is one of the more reasonable stances I’ve come across, but still seems potentially counterproductive.
I do hear you regarding the access to millions of candidates. It’s a great problem to have, I just wonder if the old ways are still relevant.
Thanks for being willing to share your thought processes on this - it’s what I love about this site.
On the flip-side, how much time do you spend explaining to an interviewee what hurdles they must overcome in order to code in the first place? Cultural blocks? Inefficiencies that don't have an owner or visibility to the greater org?
If your infra is well architected and any random engineer can start contributing within a week to a month, kudos.
In my experience, way too much energy is put into "can they do the job?" when the energy should be in "is this an environment where engineers can do their job?"
For example, I'm on an internal tooling team, and so there's a support component where we have to be accessible and accountable to devs who encounter problems using tools we've created. We have a support rotation where one person a sprint is primarily tasked with being the first contact for people with questions, problems, etc. We've had people on the team before who didn't like this and it showed because they put little effort into it. So now we're very up front during the interviews that this is part of the job; we don't want to hire someone who will be unhappy in the role. It's what we think is the best we can do.
It's hard to answer about cultural blocks, because I haven't experienced those myself. I'm not sure how to even approach that; "hey, I see you're $different, here's some potential challenges you may need to deal with" ? That seems like it is as likely to offend as aid. That said, we do try to get a candidate in front of a variety of people, both to get a sense of how they deal with different people, and to give the candidate an opportunity to get feedback from people in different parts of the organization.
Ultimately you have to realize that any process that is gating people from an opportunity & resources is going to upset people.
People have an amazing capacity to make up elaborate reasons and narratives for things to be unfair, especially when there is a group of them.
You also seem to think that everyone is able to talk while they are coding, which is not the case.
This. In an interview, it doesn't matter. The interview process only selects for correct results, not for an individual's ability to solve problems.
As I said, some of the test cases for this particular problem can be strings up to 100,000 characters long. Do you really think it's faster to solve those by hand than to come up with a correct algorithm to do it for you?
You have AI that is capable of solving your interview questions. These questions were designed to select the best candidates. Now you have AI that is cheaper and faster. Please, fire your engineers and embrace the next generation of programming. Why would you need an engineer when a bot will do it for you? After all, your candidates suck at leetcode interviews.
What's even cheaper and faster than the AI solution is for us to look up the answer in our own database. It turns out that we already know the answers to all the questions we ask in interviews. We're afraid that you've completely misunderstood the goals of our interviewing process.
I’d reconsider this assumption. My own assumption may also be wrong, but in a different way. I think these questions were designed to (in no particular order):
• Identify hirable candidates
• Be very scalable and allow for the interviewing, rejection, and acceptance of hundreds/thousands of candidates
• Determine who would put the work in to pass this sort of screening
More like designed to do initial screening and filter out garbage candidates?
This implies that hiring people via leetcode interviews will result in staff who can only copy-paste and remix standard solutions they have been trained on, and who are incapable of being creative and generating completely new or unorthodox solutions.
The latter is what most companies need, but don’t yet know they need.
Leetcode was the wet dream of executives: a standard, factory-like process to produce more code drones in a cheap and efficient way. The next wet dream is replacing them with AI. They forget that software development is a creative process.
I guess you're implying that this doesn't happen with humans as well. But if you give a pen to an infant, or to a man who has always lived in a cave, he's not going to draw anything meaningful either, just like an untrained AI. I'm of the opinion that everything is derivative.
Exactly. Very few executives have read Hackers & Painters [0], from what I've been able to gather anecdotally. Almost nobody should be fearful that low-code solutions and/or AI will replace their livelihood yet.
[0] - http://www.paulgraham.com/hackpaint.html
This will enable engineers to focus on higher order problems, similar to how compilers and programming languages allowed engineers to be more productive than having to write assembly.
The way this plays out for software development is that we may include GPT prompts in our code repositories, treating them as library functions or macros that could be compiled into regular code or interpreted at runtime.
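Purely as a sketch of what that might look like - call_llm below is a hypothetical stand-in for whatever completion API you'd actually wire up, not a real library call:

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real completion API.
        raise NotImplementedError

    def prompt_function(template: str):
        # Treat a natural-language prompt template, checked into the repo
        # alongside the code, as a callable "library function" that is
        # interpreted at runtime by the model.
        def run(**kwargs):
            return call_llm(template.format(**kwargs))
        return run

    # Usage: the prompt behaves like a macro expanded at runtime.
    summarize = prompt_function("Summarize in one sentence: {text}")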
Every time I have ever been tested in any kind of interview, they 100 percent care more about the solution than how you get there. Some of the honest ones have admitted it; others have pretended they don't, to try and make you feel better about not reaching it in time. I have never passed an interview with a live coding round, especially one involving leetcode, where they were fine with me not having reached a solution in the 10, 35, or 45 minute time window given.
INPUT:
Given an m x n integer matrix, return the length of the longest increasing path in the matrix.
From each cell, you can move in four directions: left, right, up, or down. You may not move diagonally or move outside the boundary (i.e., wrap-around is not allowed).
Input: matrix = [[9,9,4],[6,6,8],[2,1,1]]
Output: 4
Explanation: The longest increasing path is [1, 2, 6, 9].

    class Solution:
        def longestIncreasingPath(self, matrix: List[List[int]]) -> int:
ChatGPT:
    if not matrix or not matrix[0]:
        return 0
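(The rest of the model's reply isn't reproduced above. For comparison, a standard memoized-DFS solution to this problem - not necessarily what ChatGPT emitted - looks roughly like this:)

    from functools import lru_cache
    from typing import List

    class Solution:
        def longestIncreasingPath(self, matrix: List[List[int]]) -> int:
            if not matrix or not matrix[0]:
                return 0
            rows, cols = len(matrix), len(matrix[0])

            @lru_cache(maxsize=None)
            def dfs(r: int, c: int) -> int:
                # Longest increasing path starting from cell (r, c).
                best = 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[r][c]:
                        best = max(best, 1 + dfs(nr, nc))
                return best

            return max(dfs(r, c) for r in range(rows) for c in range(cols))

Each cell is solved once thanks to the memoization, so this runs in O(m*n) time.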
I then asked it to give me a class to convert a Stream to a seekable Stream (something else I needed to do recently), and again, I was pretty impressed with the result (though I needed to avoid using MemoryStream so it could work with potentially huge streams; I tried to explain that, but the solution didn't look right at all - basically the only way you can do it is by storing the data on disk).
https://codeforces.com/contest/1672/problem/D
Leetcode is a hazing ritual, and it doesn't matter if AI excels at it; what matters is whether human programmers are able to 'perform' in front of an interviewer. So leetcode is here to stay; otherwise you will have these competitive programming problems replacing it, turning interviewing into an even bigger nightmare.
All we need now is a pocket AI device out of one of William Gibson's novels, paired to a neural link, and one can solve the hardest interview questions.
That kind of snake eating its own tail will likely accumulate errors over time, and eventually the internet will be just chaos.
I especially liked its solution for two sum; first it gave the O(N^2) solution and then it gave the O(N) solution. With good explanations for both.
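(For reference, sketches of the two standard approaches it's describing - the function names are mine:)

    def two_sum_quadratic(nums, target):
        # O(N^2): check every pair of indices; assumes a solution exists.
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return [i, j]

    def two_sum_linear(nums, target):
        # O(N): remember each value's index and look up the complement.
        seen = {}
        for i, x in enumerate(nums):
            if target - x in seen:
                return [seen[target - x], i]
            seen[x] = i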
Now tell me, how many of the BUGFIX-66 problems can it solve?
https://BUGFIX-66.com
Copilot can solve a few of the simple ones at the beginning (simple matrix multiplication, simple radix sort, etc., that appear often in the training data, and some of the harder ones whose solution appears on GitHub, e.g., the uncorrected prediction/correction compressor/decompressor whose solutions were front-page on Hacker News).
But as you go down the list, into material outside the training set, the language model won't be able to solve it.
Try it. Paste the title, the introductory text, and the unfixed code, and see whether the omnipotent AI can solve these (quite straightforward) problems. For how many does it need the hint?