But does anyone else get embarrassed by their career choice when they read things like this?
I've loved software since I was a kid, but as I get older, and my friends' careers develop in private equity, medicine, law, {basically anything else}, I can tell a distinct difference between their fields and mine. Like, there's no way a grown adult in another field evaluates another grown adult with anything like the mechanism we see here. I know this for a fact.
Just last week I saw a comment from a guy who proudly serves millions of webpages off a CSV-powered database, citing only reasons that literally any other database would also cover.
It just doesn't feel like this is right.
Medicine has medical school after a degree, a 5+ year residency under close supervision with significant failure rates, legal liability for malpractice, and ongoing licensing requirements.
So explain to us what it is that you "know for a fact" about them having it easier. Most of the people reading this, myself included, would never have been allowed into this industry, let alone been allowed to stay in it, if the bar were as high as law or medicine.
By comparison, failing a leetcode interview means you've got to find a new company to interview with.
This particular question is a bit ill-formed and confusing, I will say. But that might serve as a nice signal to the candidate that they should work elsewhere, so not all is lost.
You and I are just not of that higher layer. We're kind of laborers, given these simple aptitude tests.
When I was on track to get into that higher layer 15 years ago, I got my last job just by invitation and a half-hour talk with a VP. The next offer and other invitations soon came the same way, yet I got lazy and stuck with my current job, simplemindedly digging the trench deeper and deeper like a laborer.
Agreed. ‘sendOnce’ implies something very specific in most async settings and, in this interview question, is being used to mean something rather different.
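For example, in most codebases I'd read sendOnce as something more like this sketch (the conventional meaning, not the article's):

// Sketch: what "sendOnce" usually suggests — a wrapper that performs the
// underlying send at most one time, no matter how often it's called.
const sendOnce = (send) => {
  let sent = false;
  return (payload, callback) => {
    if (sent) return; // later calls are ignored entirely
    sent = true;
    send(payload, callback);
  };
};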
It’s not the ability to communicate effectively that’s at play here, it’s your ability to read your interviewer’s thoughts. Sure thing, if you work with stakeholders, you need some of that as well, but you typically can iterate with them as needed, whereas you have a single shot in the interview.
Plenty of times, at the end of the interview, I do have a better mental picture of the problem and can come up with a way better solution, but “hey, 1h has already passed so get the fuck out of here. Next!”
As it stands, we still don't know why the server was broken in this way and why they created a work around in the client instead of fixing the server.
What is the delay actually doing? Does it actually introduce bugs into that backend? How do we check that?
We’d get on calls with them and they’d be like “you can’t do multithreading!” We eventually parsed out that what they literally meant was that we could only make a single request to their API at a time. We had to integrate with them, and they weren’t going to fix it on their side.
(Our solution ended up being a lot more complicated than this, as we had multiple processes across multiple machines potentially making concurrent requests.)
> this interview can be given in JavaScript or any other language
it's a language-agnostic question...but it revolves around the assumption of making a callback on request completion, which is common in JS but usually not idiomatic at all if you're solving this in some other language.
followed by:
> For candidates without JavaScript experience or doing this interview in pseudo-code, you have to tell them that there's another function available to them now with the following signature:
> declare function setTimeout(callback: () => void, delayMs: number): number;
so you add in this curveball of delaying requests (it's unclear why?), and it's trivial to solve...using a function from the JS stdlib. and if the candidate is not using JS, you need to tell them "oh there's a function from JS that you can assume is available"
> After sendOnce is implemented, it's time to make the interview a lot more interesting. This is where you can start to distinguish less skilled software engineers from more skilled software engineers. You can do this by adding a bunch of new requirements to the problem
as you originally specified it, this code is a workaround for a buggy server. and for Contrived Interview Reasons we can't modify the server at all, only the client.
in that scenario, "extend it into a generic queue with a bunch of bells and whistles" is maybe the worst design decision you could pursue? this thing, if it existed in the real world, should be named something like SingleRequestQueueForWorkingAroundHopelesslyBuggyServer with comments explaining the backstory for why it needs to exist. working around the hopelessly buggy server should be roped off into one small corner of the codebase, and not allowed to infect other code that makes normal requests to non-buggy servers.
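roughly what I mean by roping it off, as a sketch (the long name is deliberate; send is the callback-style function from the question):

// Sketch only: the workaround lives in one module with an obnoxious name and a
// comment block explaining the backstory. everything else keeps calling plain send().
const SingleRequestQueueForWorkingAroundHopelesslyBuggyServer = (send) => {
  let chain = Promise.resolve(); // at most one request in flight at a time
  return (payload, callback) =>
    (chain = chain
      .then(() => new Promise(resolve => send(payload, resolve)))
      .then(callback)
      .catch(() => {})); // keep the chain alive if a callback throws
};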
I think it has clear requirements and opportunities for nudges from the interviewer without invalidating the assessment (when someone inevitably gets tunnel vision on one particular requirement). It has plenty of ways for an interviewee to demonstrate their knowledge and solve the problem in different ways.
I've run debounce interview questions that attempt to exercise similar competencies from candidates, layering on requirements as time allows (leading/trailing edge, cancel, etc.), and this queue form honestly feels closer to what I'd expect devs to actually have built in their day-to-day work.
I do agree that this is quite JavaScript-specific though.
We actually have this pattern in our codebase and, while we don’t have all the features on top, it’s a succinct enough thing to understand that also gives lots of opportunity for discussion.
Not advocating for this in prod but in the context of a programming puzzle it can be neat.
late edit: ironically this is also a comment on the LLM talk in TFA: messing with the event loop like this can give you a strong mental model of JS semantics. Using LLMs I would just have accepted a loop and never learned about promise chains. This is the risk in using LLMs: you plateau. If you will allow a tortured metaphor: my naive understanding of special relativity is that you always move at light speed, but in 4 dimensions, so the faster you move in the 3D world, the slower you move through time, and vice versa. Skill is similar: your skill vector is always a fixed size (= "talent"?). If you use LLMs, it's basically flat: you complete tasks fast but learn nothing. Without them, you move diagonally upwards: always improving, but slower in the "task completion" plane. Are you ready to plateau?
let isProcessing = false;

async function checkFlagAndRun(task) {
  if (isProcessing) {
    // Something is already running; re-check on the next event-loop turn.
    return setTimeout(() => checkFlagAndRun(task), 0);
  }
  isProcessing = true;
  try {
    await task();
  } finally {
    isProcessing = false; // release the flag even if the task throws
  }
}
should do the trick. You can test it with:

function delayedLog(message, delay) {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log(message);
      resolve();
    }, delay);
  });
}

function test(name, num) {
  for (let i = 1; i <= num; i++) {
    const delay = Math.floor(Math.random() * 1000 + 1);
    checkFlagAndRun(() => delayedLog(`${name}-${i} waited ${delay} ms`, delay));
  }
}

test('t1', 20); test('t2', 20); test('t3', 20);
BTW, for 4 scheduled tasks it basically always keeps the order, and I am not sure why. Even if the first task always runs first, the remaining 3 should race each other. 5 simultaneously scheduled tasks ruin the order.

https://developer.mozilla.org/en-US/docs/Web/API/Window/setT...
Interviewer and candidate meet at time X for a 1h session of “live coding”. A SaaS throws one random problem at them both. Let the game begin. The company can decide whether interviewer and candidate collaborate to solve the problem (the SaaS is the judge), or whether they play against each other and see who gets the optimal solution.
You can add a twist (FAANGs, most likely): if the candidate submits a “better” answer than the interviewer’s, the candidate takes over their job.
An LLM could very well be behind the SaaS.
Oh boy, I wouldn’t feel that nervous anymore in any interview. Fairness is the trick. One feels so underpowered when you know that the interviewer knows every detail about the proposed problem. But when both have no idea about the problem? That’s levelling the field!
Corporate life meets Squid Game (I quite like it :)
"Ok, but if you had to code something convulted and illogical..." I tend to have trouble with these sorts of black box problems not because of the challenge but because of going down the path feels wrong I would expect my day to day at the company would be surrounded by too clever solutions.
Also, recognize that a minimum requirement for solving this under interview pressure is a lot of low-level futzing with JavaScript async and timeout details. Not everyone comes in with that knowledge or experience, and it's fine if that is a hard requirement, but it seems ancillary to the goal of "interviewing engineers". I can't imagine anyone solving this, or even knowing how to prompt AI in the right ways, without a fair bit of prior knowledge.
This feels both too easy and too hard for an interview? I would expect almost any new grad to be able to implement this in the language of their choice. Adding delays makes it less trivial, except that the answer is... Just use the function provided by the language. That's the right answer for real code, but what are you really assessing by asking it?
It must be so boring working with you
yuck
Or you could promisify the send function and use normal async/await.
// each call appends to q, so sends run one after another in scheduling order
let q = Promise.resolve(),
    sendAsync = (p) => new Promise(r => send(p, r)),
    sendOnce = (p, c, ms) =>
      setTimeout(_ => { q = q.then(_ => sendAsync(p)).then(c); }, ms)
Or you could actually spin up a new worker thread and get multithreading :P

For this first implementation, I don't see anything ever being added to the queue. Am I missing something? A new task is added to the queue only if the queue is not empty, but when the queue is empty the task is executed directly and the queue stays empty, so in the end the queue is always empty?
If there is some kind of cooperative multitasking going on, then it should be noted in the pseudo-code with e.g. async/await or equivalent keywords. As the code stands, send() never gives control back to the calling code until it completely finishes.
let send = (payload, callback) => fetch(...).then(callback)
fetch() returns a promise synchronously, but it's not awaited.
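With async/await the suspension point would at least be visible in the pseudo-code; something like this sketch (the endpoint is made up):

// Sketch: same behavior, but the point where send() yields control back to its
// caller is an explicit await rather than being hidden inside .then().
async function send(payload) {
  const response = await fetch('/endpoint', { method: 'POST', body: payload });
  return response; // the caller resumes here once the request completes
}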
const lockify = f => {
  let lock = Promise.resolve()
  return (...args) => {
    // queue this call behind whatever is already in flight
    const result = lock.then(() => f(...args))
    // the next call waits on this one; ignore rejections so the chain survives
    lock = result.catch(() => {})
    return result.then(v => v)
  }
}
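Usage is then just wrapping whatever promise-returning function you have (a sketch; sendAsync here is an assumed promisified send):

// Sketch: calls through lockedSend wait for the previous call to settle,
// so requests go out strictly one at a time, in call order.
const sendAsync = payload =>
  fetch('/endpoint', { method: 'POST', body: payload }).then(r => r.json());

const lockedSend = lockify(sendAsync);
lockedSend('first payload');   // starts immediately
lockedSend('second payload');  // starts only after the first settles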
ALSO, while JavaScript is a single-threaded environment, the while-loop solution would still basically work thanks to the scheduler (at least if you yield, await a sleep, etc.).
It's still not a great architecture, but it's different from throttling.
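Roughly what that looks like (a sketch; the sleep helper is made up):

// Sketch: a "busy-wait" that still works in single-threaded JS, because each
// await hands control back to the event loop so the in-flight task can finish.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

let busy = false;
async function runSerialized(task) {
  while (busy) {
    await sleep(5); // yield to the scheduler instead of spinning
  }
  busy = true;
  try {
    await task();
  } finally {
    busy = false;
  }
}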
Which makes the whole coding exercise moot.
What if there are 1 million users opening the browser at the same time?
The queue question is fun but doing it in the client is not right.
Interviewers have thought about the problem they propose countless times (at least once per interview they have held), and each pass refines their understanding of the problem, so they become gods of their tiny realm. Candidates have less than one hour, plus stress, and a single shot to get it more or less right. You’re not assessing the candidate’s ability to code, nor their ability to handle new requirements as they come.
Is it only “async” because it’s doing it in JavaScript and the underlying network request API is asynchronous? Seems like, IMHO, a really bad way to describe the desired result since all IO in JavaScript is going to be async by default.
It's certainly serialized, but nothing fancy otherwise.
It would be synchronous if you blocked the requester until the request went through the queue and completed; you wouldn't need to introduce async/await.
You can see examples of this in Node's fs functions: the default ones are async, but there are synchronous variants that block the event loop from running until the file is loaded.
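For example (Node, just to illustrate the difference):

const fs = require('fs');

// Async by default: the callback runs later; the event loop keeps going meanwhile.
fs.readFile('/tmp/data.txt', 'utf8', (err, data) => {
  if (err) throw err;
  console.log(data);
});

// Synchronous variant: blocks the event loop until the file is fully read.
const contents = fs.readFileSync('/tmp/data.txt', 'utf8');
console.log(contents);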
“… it doesn't ever have to handle more than one request at once (at least from the same client, so we can assume this is a single-server per-client type of architecture).“
For sure a multithreaded async queue would be a very interesting interview, but if you started with the send system the interview is constructed around, you'd run out of time quickly.