The analogy here is poor; reducing thrashing in those obnoxious search completion interfaces isn't like debouncing.
Sure, if we ignore everything about it that is not like debouncing, and we still have something left after that, then whatever is left is like debouncing.
One important difference is that if you have unlimited amounts of low latency and processing power, you can do a full search for each keystroke, filter it down to half a dozen results and display the completions. In other words, the more power you have, the less important it is to do any "debouncing".
Switch debouncing is not like this. The faster your processor is at sampling the switch, the more bounces it sees, and consequently the more crap it has to clean up. Debouncing certainly does not go away with a faster microcontroller.
I think it makes sense if you view it from a control theory perspective rather than an embedded perspective. The mechanics of the UI (be that a physical button or text input) create a flappy signal. Naively updating the UI on that signal would create jank. So we apply some hysteresis to obtain a clean signal. In the same way that acting 50 times on a single button press is incorrect behavior, saving (or searching or what have you) 50 times while typing a single sentence isn't correct (or at least is undesired).
The example of 10ms is way too low though; anything less than 250ms seems needlessly aggressive to me. 250ms is still going to feel very snappy. I think if you're typing at 40-50wpm you'll probably have an interval of 100-150ms between characters, so 10ms is hardly debouncing anything.
If you make a web UI in which a button is styled via an icon image (or otherwise) to look like a launchable application or file, those users will double click on it.
If you make it look like a button, they won't; they were certainly not trained to double-click on [OK] or [Cancel] in an OK/Cancel dialog box, for instance!
Double clicking to launch an action on a file makes sense because you need single click for selecting it. There are things you can do with it other than launch, like dragging it to another location.
TL;DR: don't make buttons look like elements that can be selected and dragged somewhere?
--
Another reason: I've sometimes multiply clicked on some button-like thing in a web UI because, in spite of working fine, it gave no indication that it had received the click!
It was styled to look like a button in the unclicked state ... and that one image was all it had.
To detect that the action launched you have to look for clues in the browser status areas showing that something is being loaded. Those are often made unobtrusive these days. Gone are the days of Netscape's spinning planet.
When the user sees that a button doesn't change state upon being clicked, yet the normal cursor is seen (pointer, not hourglass or spinning thing or whatever) they assume that the application has become unresponsive due to a bug or performance problem, or that somehow the click event was dropped; maybe their button is physically not working or whatever.
WTF no it won't.
React in particular is data driven, so in the above example, if you make the API call on each keypress and save it into state or whatever, the UI will update automatically. I can type 70 words per minute. Nobody wants the search results to update that fast. (Should we be building searches that work this way? Often you have no choice.) A slow network + a short search string + a not-top-of-the-line device like a cheap phone means a really janky experience. And even if it's not janky, it's a waste of your users' bandwidth (not everybody has unlimited) and an unnecessary drain on your server resources.
Even though we say “update as the user types” people type in bursts. There’s no reason not to debounce it, and if you can make the debounce function composable, you can reuse it all over the place. It’s a courtesy to the users and a good practice.
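A minimal sketch of what such a composable debounce could look like in TypeScript (the helper name, the 250 ms delay, the element id, and the search endpoint are all just illustrative, not from the article):

    // Reusable trailing-edge debounce: the wrapped function only runs after the
    // caller has been quiet for `waitMs`, and only with the latest arguments.
    function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A): void => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    // Usage: only the last keystroke of a typing burst actually triggers a search.
    const runSearch = (query: string) => {
      // hypothetical endpoint
      fetch(`/api/search?q=${encodeURIComponent(query)}`).then((r) => r.json()).then(console.log);
    };
    const debouncedSearch = debounce(runSearch, 250);

    document.querySelector<HTMLInputElement>("#search")?.addEventListener("input", (e) => {
      debouncedSearch((e.target as HTMLInputElement).value);
    });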
So it seems to me that it's entirely about not wasting resources/to be polite. In the limit where you have a computer that can do its work faster than your display refreshes, letting it do so seems to clearly make everything feel snappier.
No one's typing quickly on a phone anyway, and most people probably do a word at a time, and that word will come in slower than your debounce, so again there is no point in delaying it.
One thing another user pointed out is that your search does need to be stable. An exact substring match shouldn't randomly get bumped down as you type more.
A separate issue you might encounter for slow queries is that requests might get blocked behind each other, such that your new query is issued before the previous one has even been sent. In that case it makes sense to cancel the unsent one (and, if very expensive, perhaps even the sent ones), but I don't know that web browsers can tell you whether a request is queued or in-flight, or give you meaningful lifecycle hooks. Obviously normal programs have a lot more flexibility in how to handle this. But this is also not the case posited originally, which is low latency/fast processing.
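Browsers won't tell you whether a request is still queued or already on the wire, but you can get most of the benefit by simply aborting the previous fetch whenever a new query is issued. A hedged sketch using AbortController (the endpoint is made up):

    // Abort the previous search request (queued or in flight) whenever a new
    // query comes in, so stale responses never land after newer ones.
    // Callers of a superseded query will see an AbortError rejection.
    let controller: AbortController | undefined;

    async function search(query: string): Promise<unknown> {
      controller?.abort();
      controller = new AbortController();
      const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`, {
        signal: controller.signal,
      });
      return res.json();
    }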
At 20-30ms or more, it starts to make playing unpleasant (but I guess for text input it's still reasonable).
50ms+ and it starts becoming unusable or extremely unpleasant, even for low expectations.
I'm not sure how much the perception of delay and the brain lag differs between audio and visual stimuli.
But that's just about the perceived snappiness for immediate interactions like characters appearing on screen.
For events that trigger some more complex visual reaction, I'd say everything below 25ms (or more, depending on context) feels almost instant.
Above 50ms you get into the territory where you have to think about optimistic feedback.
A point that most seem to miss here is that debouncing in FE is often about asynchronous and/or heavy work, e.g. fetching search suggestions or filtering a large, visible list.
Good UIs do a lot of work to provide immediate feedback while debouncing expensive work.
A typical example: when you type and your input becomes longer with the same prefix, comboboxes don't always need to fetch, they can filter if the result set was already smallish.
If your combobox is more complex and more like a real search (and adding characters might add new results), this makes no sense – except as an optimistic update.
_Not_ debouncing expensive work can lead to jank though.
Type-ahead with an offline list of 1000+ search results can already be enough, especially when the suggestions are not just rows of text.
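Here's a rough sketch of that prefix-filtering idea (the cache shape, the 50-row "smallish" threshold, and fetchSuggestions are assumptions, and it only works when suggestions are plain substring matches):

    // If the new query just extends the cached one and the cached result set
    // was already small, filter locally instead of fetching again.
    interface Cache { query: string; results: string[] }
    let cache: Cache | undefined;

    async function suggest(
      query: string,
      fetchSuggestions: (q: string) => Promise<string[]>,
    ): Promise<string[]> {
      if (cache && query.startsWith(cache.query) && cache.results.length <= 50) {
        const q = query.toLowerCase();
        return cache.results.filter((r) => r.toLowerCase().includes(q));
      }
      const results = await fetchSuggestions(query);
      cache = { query, results };
      return results;
    }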
Sound travels 3.4 meters in that time; if your speakers are that far away in a live situation, there is your extra 10 ms.
I was mainly thinking about keyboards where you have at least some close exposure to a/the sound source.
For example, organ players sure need to incorporate some intuition about the pipe sound latency into their skills (on top of the sound space and reverb!)
https://news.ycombinator.com/item?id=44822183
The state machine will report a "make" event early, and then continue to deal with the bounces to detect "break".
And there's no reason for a keyboard to be using anything different. As I said, the real delay factor with keyboard is matrix scanning. If there is a keyboard that has 30ms of latency to register a keypress, I would guess that a ~400Hz (sqrt(104) -> 11 columns?) scanning frequency was as good as could be handled by whatever early cheap USB microcontroller they used, and its designers figured that was good enough for productivity use.
I've programmed my own keyboards, mice and game controllers. If you want the fastest response time then you'd make debouncing be asymmetric: report press ("Make") on the first leading edge, and don't report release ("Break") until the signal has been stable for n ms after a trailing edge. That is the opposite of what's done in the blog article.
Having a delay on the leading edge is for electrically noisy environments, such as among electric motors and a long wire from the switch to the MCU, where you could potentially get spurious signals that are not from a key press. Debouncing could also be done in hardware without delay, if you have a three-pole switch and an electronic latch.
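That asymmetric policy is easiest to see as a tiny state machine. Here it is sketched in TypeScript rather than firmware C, with illustrative names and a 5 ms settle time:

    // Asymmetric debounce: report "make" immediately on the first press edge,
    // but report "break" only after the line has stayed released for `settleMs`.
    type KeyEvent = "make" | "break";

    function makeAsymmetricDebouncer(settleMs: number, emit: (e: KeyEvent) => void) {
      let reported = false;                 // what we've told the host: pressed?
      let releaseTimer: ReturnType<typeof setTimeout> | undefined;

      return (pressed: boolean) => {        // call with each raw sample/edge
        if (pressed) {
          if (releaseTimer !== undefined) { // bounce back to "pressed": cancel pending break
            clearTimeout(releaseTimer);
            releaseTimer = undefined;
          }
          if (!reported) {
            reported = true;
            emit("make");                   // leading edge reported with zero delay
          }
        } else if (reported && releaseTimer === undefined) {
          releaseTimer = setTimeout(() => { // only break once the release has settled
            reported = false;
            releaseTimer = undefined;
            emit("break");
          }, settleMs);
        }
      };
    }

    // Feed it raw samples: press edges report immediately, release only after settling.
    const onSample = makeAsymmetricDebouncer(5, (e) => console.log(e));
    onSample(true);   // -> "make" immediately
    onSample(false);  // starts the 5 ms settle timer
    onSample(true);   // a bounce: cancels the pending "break"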
A better analogy would perhaps be "Event Compression": coalescing multiple consecutive events into one, used when producer and consumer are asynchronous. Better but not perfect.
Whoa!
https://www.digikey.ee/en/articles/how-to-implement-hardware...
But you don't want that, as it's useless. Until the user has actually finished typing, they're going to have more results than they can meaningfully use - especially since the majority will be irrelevant and just get in the way of the real results.
The signal in between is actually, really not useful - at least not on the first try, when the user is not aware of what's in the data source and how they can hack the search query to get their results with minimal input.
Don't make assumptions about what the user may or may not want to search for.
E.g. in my music collection I have albums from both !!! [1] and Ø [2]. I've encountered software that "helpfully" prevented me from searching for these artists, because the developers thought that surely no one would search for such terms.
_______
[1] https://www.discogs.com/artist/207714-!!! ← See? The HN link highlighter also thinks that URLs cannot end with !!!.
In your example, the developers exercised poor judgment by making a brittle assumption about the data. That's bad. But there is no UX without some model of the user. Making assumptions about the user's rate of perception is much safer (in a web app context; it would be a different story in a competitive esports game).
<https://www.discogs.com/artist/207714-!!!>
Edit: It does. So, this would be yet another of the squillion-ish examples to support the advice "Please, for the love of god, always enclose your URLs in '<>'.". (And if you're writing a general-purpose URL linkifier, PLEASE just assume that everything between those characters IS part of the URL, rather than assuming you know better than the user.)
> Characters can be unsafe for a number of reasons. ... The characters "<" and ">" are unsafe because they are used as the delimiters around URLs in free text
Also note the "APPENDIX" section on page 22 of RFC1738, which provides recommendations for embedding URLs in other contexts (such as in an essay, email, or internet forum post).
Do you have standards documents that disagree with these IETF ones?
If you're using the observed behavior of your browser's address bar as your proof that ">" is valid in a URL, do note that the URL
https://news.ycombinator.com/item?id=44826199>hello there
might appear to contain a space and the ">" character, but it is actually represented as https://news.ycombinator.com/item?id=44826199%3Ehello%20there
behind the scenes. Your web browser is pretty-printing it for you so it looks nicer and is easier to read.[0] <https://datatracker.ietf.org/doc/html/rfc3986#appendix-A>
[1] <https://datatracker.ietf.org/doc/html/rfc1738#section-2.2>
URLs with the characters ' ' and '>' in them are not valid URLs. Perhaps your web browser does things differently than my Firefox and Chrome instances, but when I copy out that pretty-printed URL from the address bar and paste it, I get the following string:
https://news.ycombinator.com/item?id=44826199%3Ehello%20there
Though -oddly-, while Chrome's pretty-printer does pretty-print %3E, it fails to pretty-print %20. Go figure.

Don't become unresponsive after one key to search for results. If the search impacts responsiveness, you need to have a hold-off time before kicking it off so that a longer prefix/infix can be gathered, which will reduce the search space and improve its relevance.
Even the 10ms in TFA is too low. I personally wouldn't mind (or probably even notice) a delay of 100 ms.
Whatever delay you add before showing results doesn't get hidden by the display and user's reading latency, it adds to it.
Be that as it may, the performance side of it becomes irrelevant. The UI responds to the user's keystrokes instantly, and when they type what they had intended to type, the search suggestions are there.
Switch debouncing does not become irrelevant with unlimited computing power.
Doesn't really apply to a search box, where it's more of a "fire a delayed event only if no new event arrives during a specific time window, keeping only the last event".
At which point you are doing debouncing: distinguishing an intentional switch opening from the bounce that continued after you latched. You need some hold-off time or something.
Also, switch contacts bounce when opening!
A latch could be great for some kind of panic button which indicates a state change that continues to be asserted when the switch opens (and is reset in some other way).
RC circuits are more typical: you want to filter out high-frequency pulses (indicative of bouncing) and only keep the settled/steady-state signal. A latch would be too eager, I think.
Come to think of it, throttle is the much easier-to-understand analogy.
And battery, or at least enough air conditioning to cool down the desktop because of those extraneous operations, right?
Yes, it's not an exact comparison (hence analogy) – but it's not anything worth getting into a fight about.
You debounce a physical switch because it makes contact multiple times before settling in the contacted position, e.g. you might wait until it's settled before acting, or you act upon the first signal and ignore the ones that quickly follow.
And that closely resembles the goal and even implementation of UI debouncing.
It also makes sense in a search box because there you have the distinction between intermediate vs. settled state. Do you act on what might be intermediate states, or do you try to assume settled state?
Just because it might have a more varied or more abstract meaning in another industry doesn't mean it's a bad analogy, even though Javascript is involved, sheesh.
Yes, and returning 30,000 results matching the "a" they just typed is not going to do that. "Getting the desired result fastest" probably requires somewhere between 2 and 10 characters, context-dependent.
If it really only makes sense to perform the action once, then disable/remove the button on the first click. If it makes sense to click the button multiple times, then there should be no limit to how fast you can do that. It's really infuriating when crappy software drops user input because it's too slow to process one input before the next. There is a reason why input these days comes in events that are queued, and we aren't still checking whether the key is up or down in a loop.
There are async-safe variants but the typical lodash-style implementations are not. If you want the semantics of "return a promise when the function is actually invoked and resolve it when the underlying async function resolves", you'll have to carefully vet if the implementation actually does that.
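If you do want those semantics, every caller has to receive a promise that settles only when the debounced call actually runs. A sketch of one way to do it (this is an assumption about the desired behaviour, not how lodash's debounce works):

    // Debounce an async function so every caller gets a promise that settles
    // with the result of the invocation that actually happens. Callers whose
    // arguments were superseded get the latest result too; rejecting them
    // instead would be an equally valid design choice.
    function debounceAsync<A extends unknown[], R>(
      fn: (...args: A) => Promise<R>,
      waitMs: number,
    ) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      let pending: Array<{ resolve: (r: R) => void; reject: (e: unknown) => void }> = [];

      return (...args: A): Promise<R> =>
        new Promise<R>((resolve, reject) => {
          pending.push({ resolve, reject });
          if (timer !== undefined) clearTimeout(timer);
          timer = setTimeout(() => {
            const waiters = pending;
            pending = [];
            fn(...args).then(
              (r) => waiters.forEach((w) => w.resolve(r)),
              (e) => waiters.forEach((w) => w.reject(e)),
            );
          }, waitMs);
        });
    }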
For example, debouncing is often recommended for handlers of the resize event, but, in most cases, it is not needed for handlers of observations coming from ResizeObserver.
I think this is the case for other modern APIs as well. I know that, for example, you don’t need debouncing for the relatively new scrollend event (it does the debouncing on its own).
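For instance, where you might once have hand-debounced scroll or resize handlers, the newer APIs settle things for you (in browsers that support them):

    // "scrollend" fires once after scrolling has come to rest - no manual debounce.
    window.addEventListener("scrollend", () => {
      console.log("scrolling settled at", window.scrollY);
    });

    // ResizeObserver delivers per-element observations tied to the rendering
    // lifecycle, so resize handling usually doesn't need manual debouncing either.
    const ro = new ResizeObserver((entries) => {
      for (const entry of entries) {
        console.log("resized:", entry.contentRect.width, entry.contentRect.height);
      }
    });
    ro.observe(document.body);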
There are always countless edge cases that behave incorrectly - it might not be important and can be ignored, but while the general idea of debouncing sounds easy - and adding it to an rxjs observable is indeed straightforward...
Actually getting the desired behavior done via rxjs gets complicated super fast if you're required to be correct/spec compliant
The debouncing in rxjs just takes an observable and debounces it, which is essentially throttle with inverted output (it emits the last event instead of the first).
That's almost never what the product owner actually wants, at least IME.
If they give you any kind of spec, you'll quickly realize that limit.
I.e. debouncing after the request has happened is impossible, just like cleanly aborting requests on exit or similar.
There are also often a ton of signals you need to add to the observable for all the events the PO wants to respond to, such as opening dialogs, interacting with elements outside the context of the debounced event chain, and so on.
It just keeps getting more complicated with every additional thing they come up with. But if they're fine with just living with the technical limitations, all is fine.
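For reference, the straightforward rxjs starting point, before all of those extra requirements pile on, usually looks something like this (the operators are real rxjs; the element id and endpoint are made up):

    import { fromEvent, debounceTime, distinctUntilChanged, map, switchMap } from "rxjs";

    // Debounce keystrokes, skip unchanged queries, and let switchMap drop the
    // response of any request that a newer query has superseded.
    const input = document.querySelector<HTMLInputElement>("#search")!;

    const results$ = fromEvent(input, "input").pipe(
      map(() => input.value.trim()),
      debounceTime(250),
      distinctUntilChanged(),
      switchMap((q) => fetch(`/api/search?q=${encodeURIComponent(q)}`).then((r) => r.json())),
    );

    results$.subscribe((results) => console.log(results));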
https://github.com/lodash/lodash/blob/8a26eb42adb303f4adc7ef...
That said, this is a good resource on the original meaning: https://www.ganssle.com/debouncing.htm
meta: it seems like this account just submits misc links to mdn?
Human interaction with circuits, sensors and receptors works like that.
When we press a keyboard key or flip a circuit switch, the receptors are very sensitive.
We feel we pressed it once, but during that one press our fingers and hands vibrate multiple times, so the event gets registered multiple times. After the first event, a second event can only be considered legitimate if the idle period between the two matches the desired debounce delay.
In terms of web and software programming, or network request handling,
"debounce" is used as a term for pushing away something aggressive.
For example:
Picture a gate and a queue. Throttling -> the gate opens every 5 minutes and lets one person in, no matter what.
Debounce -> if the people in the queue are deliberately being aggressive, thrashing at the door to make it open, we push them away. Now, instead of 5 minutes, we tell them they have to wait another 5 minutes because they are harassing us; if they try again before that, we again tell them to wait another 5 minutes. Thus debounce is there to prevent aggressive behaviour.
In terms of, say, client-server requests over a network:
We can throttle the requests processed by the server; say the server will only process one request every 5 minutes, like how APIs have rate limits. Before those 5 minutes are up, no matter how many requests are made, they will be ignored.
But if the client is aggressive, e.g. they keep clicking the submit button and making hundreds of requests, then even with throttling the server would suffer a kind of DDoS.
So on the client side we add a debounce to the button click event,
so that even if the user keeps clicking impatiently, unnecessary network requests are not made to the server until the user stops.
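A minimal sketch of the throttle side of that contrast, applied to a submit button (the names and the 1-second interval are illustrative; a debounce, like the ones sketched earlier in the thread, would instead wait until the clicking stops):

    // Throttle: let at most one call through per interval, no matter how many arrive.
    function throttle<A extends unknown[]>(fn: (...args: A) => void, intervalMs: number) {
      let last = 0;
      return (...args: A): void => {
        const now = Date.now();
        if (now - last >= intervalMs) {
          last = now;
          fn(...args);
        }
      };
    }

    // Impatient clicking sends at most one request per second; a debounced handler
    // would instead send nothing until the user had stopped clicking for the interval.
    const submit = () => console.log("request sent");
    document.querySelector("#submit")?.addEventListener("click", throttle(submit, 1000));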