Last year, PlasticList found plastic chemicals in 86% of tested foods—including 100% of baby foods they tested. Around the same time, the EU lowered its “safe” BPA limit by 20,000×, while the FDA still allows levels roughly 100× higher than Europe’s new standard.
That seemed solvable.
Laboratory.love lets you crowdfund independent lab testing of the specific products you actually buy. Think Consumer Reports × Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid’s snacks, or whatever you’re curious about.
Find a product (or suggest one), contribute to its testing fund, and get full lab results when testing completes. If a product doesn’t reach its goal within 365 days, you’re automatically refunded. All results are published publicly.
We use the same ISO 17025-accredited methodology as PlasticList.org, testing three separate production lots per product and detecting down to parts-per-billion. The entire protocol is open.
Since last month’s “What are you working on?” post:
- 4 more products have been fully funded (now 10 total!)
- That’s 30 individual samples (we do triplicate testing on different batches) and 60 total chemical panels (two separate tests for each sample, BPA/BPS/BPF and phthalates)
- 6 results published, 4 in progress
The goal is simple: make supply chains transparent enough that cleaner ones win. When consumers have real data, markets shift.
Browse funded tests, propose your own, or just follow along: https://laboratory.love
Bit confused as to your position on funding.
I certainly want to avoid any weird incentive misalignment. Because the testing is done by an accredited lab that provides an official Certificate of Analysis, I think I could reasonably accept funding from manufacturers once I build out a feature that will allow visitors to download the COA PDF.
No manufacturer has reached out directly yet, presumably because a) this project is small and still gaining trust and b) they would probably want to do testing privately and avoid publication of 'bad' results anyway.
Personally I wouldn't have a problem with companies funding if it's via the normal route, i.e. via the web interface, such that they are just providing funds and then the normal testing/reporting takes place. But I fear this is idealistic. I can well imagine companies wanting to send you samples directly, wanting you to remove results, and even threatening legal action if they don't like the results.
2. If you find regulation-violating (or otherwise serious) levels of undesirable chemicals, do you... (a) report it to FDA; (b) initiate a class-action lawsuit; (c) short the brand's stock and then news blitz; or (d) make a Web page with the test results for people to do with it what they will?
3. Is 3 tests enough? On the several product test results I clicked, there's often wide variation among the 3 samples. Or would the visualization/rating tell me that all 3 numbers are unacceptably bad, whether it's 635.8 or 6728.6?
4. If I know that plastic contamination is a widespread problem, can I secretly fund testing of my competitors' products, to generate bad press for them?
5. Could this project be shut down by a lawsuit? Could the labs be?
1. I'm still working to make results more digestible and actionable. This will include the %TDI toggle (total daily intake, for child vs adult and USA vs EU) as seen on PlasticList, but I'm also tinkering with an even more consumer-friendly 'chemical report card'. The final results page would have both the card and the detailed table of results.
2. I have not found any regulation-violating levels yet, so in some sense, I'll cross that bridge when I get there. Part of the issue here is that many believe the FDA limits are far too relaxed, which is part of why demand for a service like laboratory.love exists.
3. This is part of the challenge that PlasticList faced, and a lot of my thinking around the chemical report card is related to this. Some folks think a single test would be sufficient to catch major red flags. I think triplicate testing is a reasonable balance: statistically robust while not being completely cost-prohibitive.
4. Yes, I suppose one could do that, as long as the funded products can be acquired by laboratory.love anonymously through their normal consumer supply chains. Laboratory.love merely acquires three separate batches of a given product from different sources, tests them at an ISO/IEC 17025-accredited lab, and publishes the data.
5. I suppose any project can be shut down by a lawsuit, but laboratory.love is not currently breaking any laws as far as I'm aware.
Great site!
For example, there are two individuals who own the same $100k machine for testing the performance of loudspeakers.
https://www.audiosciencereview.com/forum/index.php
https://www.erinsaudiocorner.com/
Both of them do measurements and YouTube videos. Neither one has a particularly good index of their completed reviews, let alone tools to compare the data.
I wish I could subscribe to support a domain like "loudspeaker spin tests" and then have my donation paid out to these reviewers when they publish new high-quality reviews with good data to a common store.
Here is something I'm struggling with as a user. I look at a product (this tofu for example [0]) and see the amounts. And then I have absolutely no clue what it means. Is it bad? How bad? I see nanograms in one place and μg in an info menu - is a μg a nanogram? And what is LOQ? Virtually 0? Simply less than the recommended amount?
I think 99% of people will have the same reaction. They will have no idea what the information means.
I clicked on some info icons to try and get more context. The context is good (it explains what the different categories are) but it still didn't help me understand the amounts. I went to "About" and it didn't help with this. I went to the FAQ and the closest I can find is:
>What makes a result 'concerning'? We don't make safety judgments. Instead, we compare results to established regulatory limits from FDA, EPA, and EFSA, noting when products exceed these thresholds. We also flag when regulatory limits themselves may be outdated based on new research.
I understand that you don't want to make the judgement and it's about transparency and getting the information out there. But the information is worthless if people don't know what it means.
I want to see the results of the test compared to the EU/US/Whoever recommendations. I want explanations of what the different chemicals are and preferably linked to peer reviewed studies explaining side effects.
Once even more tests are run, I want comparisons between product brands.
Overall still great, but it's very much an engineer's presentation of complex data. Not that that's a bad thing; being transparent with data is important, but we aren't all experts.
I'm working to make results more digestible and actionable. This will include the %TDI toggle (total daily intake, for child vs adult and USA vs EU) as seen on PlasticList, but I'm also tinkering with an even more consumer-friendly 'chemical report card'. The final results page would have both the card and the detailed table of results.
What bugs me is that plastics manufacturers advertise "BPA-free", which is technically correct, but then add a very similar chemical from the same family that has the same effect on the plastic - which is good - but also has the same effect on your endocrine system.
Some testing has been done on https://labdoor.com/ where they basically fund the testing with affiliate links, which I think could be another revenue source for your site. I did contact them in January and they said they would add the brands I requested to the list, it's just not crowdsourced the same way your site is. They received some form of backing from Mark Cuban [4].
(edit) To make this clearer: if you are looking to expand or broaden the scope a little, allowing users to request other types of testing besides plastics would be cool.
[1] - https://www.texashealth.org/areyouawellbeing/Eating-Right/Le... [2] - https://www.erc501c3.org/settlements/6f2zxji0o3m2k4jhcwgg7hd... [3] - https://www.erc501c3.org/settlements/k7p29rie5whpc5qek5kdha2... [4] - https://markcubancompanies.com/companies/labdoor/
I have considered creating specific ad-hoc campaign pages for folks with special interests and let them drive people to the page (like a more focused Kickstarter or GoFundMe with a lab on the back end).
If anyone has a special interest I'm happy to mock up what funding targets would be.
Here is a Stripe link: https://donate.stripe.com/9B614o4NWdhN83l9r06c001
I'll add subscriptions as a more formal option on laboratory.love soon!
Disclaimer: I don't think I can offer the 365-day refund with recurring donations like this. The financial infrastructure would add too much complexity.
Donated funds are pooled and allocated to the leading unfunded product anytime the pool can get that product to its funding goal.
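In pseudocode, that rule looks roughly like this (a minimal Python sketch of the description above, not the site's actual implementation):

def allocate(pool, products):
    # products: dicts with "funded" and "goal" dollar amounts
    unfunded = [p for p in products if p["funded"] < p["goal"]]
    if not unfunded:
        return pool
    # the "leading" product is the one closest to its goal
    leader = max(unfunded, key=lambda p: p["funded"] / p["goal"])
    gap = leader["goal"] - leader["funded"]
    if gap <= pool:
        leader["funded"] = leader["goal"]  # product hits its funding goal
        pool -= gap
    return pool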
I hope we can agree that we are better off than that now.
What I'm curious about is whether you think it's been a steady stream of improvements, and we just need to improve further? Or if you think there was some point between 1900 and now where food health and safety was maximized, greater than either 1900 or now, and we've regressed since then?
Or put another way: it was a simple question that the ggp can answer or not as they choose. I was just curious for their perspective.
My instinct is that things have largely gotten better over time. At a super-macro level, in 1900 we had directly adulterated food that e.g. the soldiers receiving Chicago meat called "embalmed". In the mid-20th century we had waterways that caught fire and leaded gas.
By the late 20th we had clean(er) air (this is all from a U.S. perspective) and largely safe food. I think if we were to claim a regression, the high point would have to be around 2000, but I can't point to anything specific going on now that wasn't also going on then -- e.g. I think microplastics were a thing then as well, we just weren't paying attention.
At a minimum it needs glyphosate testing. I suspect the avocado oil has no plastics but high glyphosate; it's one of the many reasons I only use high-quality olive oil and coconut oil in cooking.
It's interesting that a bunch of the funded products have been funded by a single person.
Do you know if it's the producers themselves? Worried rich people?
I've yet to have any product funded by a manufacturer. I'm open to this, but I would only publish data for products that were acquired through normal consumer supply chains anonymously.
It would be so boring - no funding accepted, everything would be freely available, no political initiatives, no recommendations, nothing. Just a treasure trove of data
A man can dream - kudos to you for actually making it a reality - great inspiration
Question for you: in general, how much does this stuff cost? what if you wanted to expand to testing beyond plastic, e.g., verifying the potency of ingredients in supplements, verifying cleaning product ingredients, etc., is that possible?
My partner lab specifically offers 300+ unique tests across the following categories: Contaminants, Elemental, Allergen, Preservatives and Additives, Microbial, and Phytochemical, Vitamin, and Actives.
So yes, expansion is definitely possible! If a billionaire wanted to fund a project like this it could certainly convert dollars into data, it's just a matter of choosing which products and contaminants to start chewing through first to ensure the data creates positive change in the real world.
https://github.com/scallyw4g/bonsai
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
For example, 1 PCR reaction (a common reaction used to amplify DNA) costs about $1 each, and we're doing tons every day. Since it is $1, nobody really tries to do anything about it - even if you do 20 PCRs in one day, eh it's not that expensive vs everything else you're doing in lab. But that calculus changes once you start scaling up with robots, and that's where I want to be.
Approximately $30 of culture media can produce >10,000,000 reactions worth of PCR enzyme, but you need the right strain and the right equipment. So, I'm producing the strain and I have the equipment! I'm working on automating the QC (usually very expensive if done by hand) and lyophilizing for super simple logistics.
My idea is that every day you can just put a tube on your robot and it can do however many PCR reactions you need that day, and when the day is over, you just throw it out! Bring the price from $1 each to $0.01 + greatly simplify logistics!
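The back-of-the-envelope arithmetic behind that (my own numbers, taken from the estimates above, in Python):

media_cost = 30.0        # dollars of culture media per batch
reactions = 10_000_000   # PCR reactions' worth of enzyme per batch
print(f"${media_cost / reactions:.7f} per reaction")  # ~$0.0000030 of media each
# so the $0.01/reaction target is dominated by QC and logistics, not media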
Of course, you can't really make that much money off of this... but will still be fun and impactful :)
Some things that would be cool:
- Along your lines: In general, cheap automated setups for PCR and gels
- Cheap/automatic quantifiable gels. E.g. without needing a kV supply capillary, expensive QPCR machines etc.
- Cheaper enzymes in general
- More options for -80 freezers
- Cheaper/more automated DNA quantification. I got a v1 Qubit which gets the job done, but new ones are very expensive, and reagent costs add up.
- Cheaper shaking incubator options. You can get cheap shakers and incubators, but not cheap combined ones... which you need for pretty much everything. Placing one in the other can work, but is sub-optimal due to size and power-cord considerations.
- More centrifuges that can do 10,000 g... this is the minimum for many protocols.
- Ability to buy pure ethanol without outrageous prices or hazardous shipping fees.
- Not sure if this is feasible but... reasonable cost machines to synthesize oligos?

1. You can purchase gel boxes that do 48 to 96 lanes at once. I'd ideally have it on a robot whose only purpose is to load and run these once or twice a day. All the samples coming through get batched together and run
2. The Bioanalyzer seems nice for quantification of things like PCRs, to make sure you're getting the right size. But to be honest I haven't thought that much about it. qPCRs actually become very cheap if you can keep the machines full. You can also use something like a NanoDrop, which is much, much cheaper
3. Pichia pastoris expression ^
4. You can use a plate reader (another thing that works nicely in bulk), but you can't really get around the reagents (though they're cheaper in bulk from China)
5. If you aggregate, these become really cheap. The complicated bits are getting the proper cytomat parts for shaking, as they are limited on the used market
6. These can't be automated well, so I honestly haven't thought too much about it.
7. Reagents are cheaper in bulk from China
8. ehhhh, maybe? But not really. If you think about a scaled centralized system, you can get away with not using oligos for a lot of things
Anyhow good luck. Would love to follow if you do anything with this in the future. Do you have a blog or anything?
Me being naive, I thought "how hard would it actually be to build a free e-sign tool?"
Turns out not that hard.
In about a weekend, I built a UETA and ESIGN compliant tool. And it was free. And it cost me less than $50. Unlimited free e-sign. https://useinkless.com/
DocuSign customers buy trust.
Free e-signatures are a great idea, have you considered getting a foundation to back the project and maybe taking out some indemnity insurance, perhaps raising a dispute fund?
It's a well-recognised tool for contract agreements, and you pay the money so that you are indemnified for any oopsies that might happen in transit.
A project to implement 1000 algorithms. I have finished around 400 so far and I am now focusing on adding test cases, writing implementations in Python and C, and creating formal proofs in Lean.
It has been a fun way to dive deeper into how algorithms work and to see the differences between practical coding and formal reasoning. The long-term goal is to make it a solid reference and learning resource that covers correctness, performance, and theory in one place.
The project is still in its draft phase and will be heavily edited over the next few months and years as it grows and improves.
If anyone has thoughts on how to structure the proofs or improve the testing setup, I would love to hear ideas or feedback.
I feel like the presentation of Lomuto's algorithm on p.110 would be improved by moving the i++ after the swap and making the corresponding adjustments to the accesses to i outside the loop. Also mentioning that it's Lomuto's algorithm.
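Something like this shape, sketched in Python since I don't have the book's C in front of me (assuming the standard Lomuto formulation):

def partition(a, lo, hi):
    pivot = a[hi]              # Lomuto: last element is the pivot
    i = lo                     # a[lo:i] holds the elements seen so far that are < pivot
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1             # increment *after* the swap
    a[i], a[hi] = a[hi], a[i]  # pivot lands at its final index i
    return i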
These comments are probably too broad in scope to be useful this late in the project, so consider them a note to myself. C as the language for presenting the algorithms has the advantage of wide availability, not sweeping performance-relevant issues like GC under the rug, and stability, but it ends up making the implementations overly monomorphic. And some data visualizations as in Sedgewick's book would also be helpful.
That is also why in TAOCP, Don Knuth used MIX (and later MMIX) to keep the book language agnostic and timeless. Before Knuth's birthday next year, I am hoping to finish most of the exercises in TAOCP.
I've been meaning to get back into it and just ask ChatGPT to straighten me out on a few things, e.g. "am I barking up the wrong tree or is this what he wants, but don't spoil me."
That said, I personally prefer Introduction to Algorithms (CLRS) for its formal rigor and clear proofs, and Grokking Algorithms for building intuition.
The broader goal of this project is to build a well-tested, reference-quality set of implementations in C, Python, and Go. That is the next milestone.
Your comment brought back an old memory for me. My first programming language in high school was Turbo Pascal. That IDE was amazing: instant compilation, the blue screen TUI, F1 for inline help, a surprisingly good debugger, and it just felt so smooth and fast back then. No internet needed, no AI assistance, just pure focus and curiosity. Oh, how I really miss those days :)
The first time I saw TP was on my uncle's Kaypro, which was sort of even more amazing: the screen wasn't capable of blue, the keyboard didn't have an F1 key, and the CPU wasn't capable of instant compilation. But the alternatives were assembly language and Microsoft BASIC!
Feels like what x86 could have been if it started clean.
I've only written some small and simple RISC-V code; enough to know that I like it a lot better than AMD64 but not as well as ARM32.
However, you are right, Prof. Sedgewick has long maintained versions of his material across multiple languages. I remember that the third edition has C, C++ and Java versions.
Sedgewick's C edition is the book that first showed me that programs can be beautiful. His favorite algorithm is of course Quicksort. His version of the partition function from Algorithms in C (01998 edition) is, I think, worth studying:
int partition(Item a[], int l, int r)
{ int i = l-1, j = r; Item v = a[r];
for (;;)
{
while (less(a[++i], v)) ;
while (less(v, a[--j])) if (j == l) break;
if (i >= j) break;
exch(a[i], a[j]);
}
exch(a[i], a[r]);
return i;
}
There are some things to dislike in this presentation, like naming a variable "l", the rather excessively horizontal layout, and the brace style. And of course he picked Hoare's original branchy partition algorithm rather than Lomuto's simpler one. But I think there are some other choices that are beneficial:

• He's using short variable names to let the algorithm speak for itself.
• He uses a type Item (perhaps a typedef) to make the distinction clear between the type of array elements and the type of array indices.
• Similarly, he's using exch and less functions to abstract away the comparison and exchange operations. In the Golang standard library implementation, these are actually methods on the container to be sorted taking indices as arguments, so the concrete item type doesn't appear in the algorithm at all, which I think is an important insight. Of course the numerous
It would be possible to use the C preprocessor to compile specialized versions of the function for particular data types and comparators in a type-safe way by redefining Item, exch, and less, but of course when you do that the error messages are difficult to decipher. (But exch can't be just a regular function, because C passes arguments by value.)
Sedgewick's original presentation in the book (from the 01983 edition) inlines the partitioning into the Quicksort loop:
procedure quicksort(l, r: integer)
var v, t, i, j: integer;
begin
if r > l then
begin
v := a[r]; i := l-1; j := r;
repeat
repeat i := i+1 until a[i] >= v;
repeat j := j-1 until a[j] <= v;
t := a[i]; a[i] := a[j]; a[j] := t;
until j <= i;
a[j] := a[i]; a[i] := a[r]; a[r] := t;
quicksort(l, i-1);
quicksort(i+1, r)
end
end;
From my perspective the C edition is an enormous improvement. I don't have a copy of the (Pascal) second edition to compare. The third C++ edition (01998) looks almost identical to the C version, except that it's templatized on the Item type, which in C++ also permits you to use static ad-hoc polymorphism (function overloading) for the exch function and the comparison operator:
template <class Item>
int partition(Item a[], int l, int r)
{ int i = l-1, j = r; Item v = a[r];
for (;;)
{
while (a[++i] < v) ;
while (v < a[--j]) if (j == l) break;
if (i >= j) break;
exch(a[i], a[j]);
}
exch(a[i], a[r]);
return i;
}
He illustrates the dynamic behavior of sorting algorithms in multiple ways. One is a vertical time sequence of horizontal arrays of letters, circling the pointed-to elements and graying out the unchanged ones. Another, which can scale to larger arrays, is showing a small-multiple time sequence of scatterplots, like a comic strip, where the X coordinate is the array index and the Y coordinate is the element value at that array index. (A variant of this draws bars whose heights are the element values.) From my perspective, static visualizations like this should be preferred over animation when applicable (https://worrydream.com/MagicInk/), but this case is maybe on the borderline where both animation and static visualization have unique merits.

My own terse presentation of Lomuto's algorithm is http://canonical.org/~kragen/sw/dev3/paperalgo#addtoc_21. I don't think it reaches the clarity of an exposition in C or Python, but it is much easier to write on a whiteboard. I do think that expressing the partition subroutine in terms of an operation on a Golang-like slice improves the clarity significantly. In C or C++ you could pass a base pointer and a length; perhaps Sedgewick didn't want to expose students to this aspect of C arrays, or perhaps he was still constrained by Pascal thinking.
type ByValue []int
func (a ByValue) Len() int { return len(a) }
func (a ByValue) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByValue) Less(i, j int) bool { return a[i] < a[j] }
sort.Sort(ByValue(a))
It worked fine but felt a bit awkward for small programs. After Go 1.18, you can do it directly with generics:
s := []int{3, 1, 4, 1, 5, 9}
slices.Sort(s)
Much simpler and expressive, yet still type safe. I really like how Go kept it minimal without adding too many abstractions.
Compare how it's done in ML, which has had generics for almost 50 years: https://ocaml.org/cookbook/sorting-lists-and-arrays/stdlib https://ocaml.org/manual/5.4/api/List.html#1_Sorting
Golang, like Python and JS, is garbage-collected; in some cases this is helpful for focusing attention on the central semantics of the algorithms, but it does have the disadvantage that it makes reasoning about space behavior and runtime performance much more difficult. One of Quicksort's chief benefits, for example, is that it's in-place, which the Python version in your "cheat sheet" isn't.
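For contrast, an in-place version is short even in Python (a sketch using Lomuto partitioning, not the cheat sheet's code):

def quicksort(a, lo=0, hi=None):
    # sorts the list a in place; extra space is only the recursion stack
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot, i = a[hi], lo
    for j in range(lo, hi):          # Lomuto partition around a[hi]
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)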
I don't have any feedback, but rather a question. I've seen many repositories with people sharing their algorithms on GitHub, for many different languages (e.g. https://github.com/TheAlgorithms). What did you find missing from those repositories that made you want to write a book and implement hundreds of algorithms yourself; what did you find lacking?
No organization for learners either. It jumps straight into implementations without a logical flow from fundamentals. I want to build something more structured: start from the very foundation (like data structures, recursion, and complexity analysis), then move to classical algorithms (search, sort, graph, dynamic programming), and eventually extend to database internals, optimization, and even machine learning or AI algorithms. Basically, a single consistent roadmap from beginner to researcher level, where every algorithm connects to the next and builds intuition step by step.
Another very good resource for beginners is https://www.hello-algo.com. At first, I actually wanted to contribute there, since it explains algorithms visually and in simple language. But it mostly covers the basics and stops before more advanced or applied topics. I want to go deeper and treat algorithms as both code and theory, with mathematical rigor and formal proofs where possible. That is something I really liked about Introduction to Algorithms (CLRS) and of course The Art of Computer Programming (TAOCP) by Knuth. They combine reasoning, math, and practice. My goal is to make something in that spirit, but more practical and modern, bridging the gap between academic books and messy open source repos.
I want to change that view and show that algorithms are beautiful and useful beyond interviews. They appear everywhere, from compilers to databases to the Linux kernel, where I found many interesting data structures worth exploring. (i will share more about this topic later)
I hope to share more of these insights and connect with others who enjoy discussing real world algorithm design, which is what I love most about the Hacker News community (except for the occasional trolls that show up from time to time).
The VM and transpiler were originally implemented by hand, and later I used Codex to help polish the code. The generated output works, though it is a bit messy in places. Hopefully, after finishing a few books, I can return to the project with more experience and add better use cases for it.
I usually feel too many people wildly throw around terms they hardly understand, in the belief they cannot possibly understand them. That's so wrong; every executive should understand some of what determines the bottom line. It's not like people skip economics because it's hard.
Would love to perhaps contribute sometime next year. Starred, and until then good luck - perhaps add a donation link!
I really like your idea of targeting executives and connecting it to real business outcomes. Getting decision makers to truly understand the fundamentals behind the technology would make a huge difference.
https://github.com/olooney/jellyjoin
It hits a sweet spot by being easier to use than record linkage[3][4] while still giving really good matches, so I think there's something there that might gain traction.
[1]: https://platform.openai.com/docs/guides/embeddings
[2]: https://en.wikipedia.org/wiki/Hungarian_algorithm
I see you saved a spot to show how to use it with an alternative embedding model. It would be nice to be able to use the library without an OpenAI api key. Might even make sense to vendor a basic open source model in your package so it can work out of the box without remote dependencies.
If you're adding more LLM integration, a cool feature might be sending the results of allow_many="left" off to an LLM completions API that supports structured outputs. Eg imagine N_left=1e5 and N_right=1e5 but they are different datasets. You could use jellyjoin to identify the top ~5 candidates in right for each left, reducing candidate matches from 1e10 to 5e5. Then you ship the 5e5 off to an LLM for final scoring/matching.
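Concretely, that second stage could look something like this (a sketch: the candidate structure is invented since I haven't checked jellyjoin's actual return format, and the model name is just an example):

import json
from openai import OpenAI

# pretend jellyjoin already reduced 1e10 pairs to ~5 candidates per left record
candidates = {
    "Acme Corp": ["ACME Corporation", "Acme Inc.", "Acme Holdings"],
    # ...one entry per left record
}

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": "For each key, pick the best-matching candidate or null. "
                   "Answer as a JSON object mapping key -> choice.\n"
                   + json.dumps(candidates),
    }],
)
print(json.loads(response.choices[0].message.content))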
Last month's "what are you working on" thread prompted me to upload this game to itch, and one month later I've got a small community, lots of feedback, and iterations. It brought a whole new life to a project that was on the verge of being abandoned.
So, I’m really grateful for this thread. https://explodi.itch.io/microlandia
https://microlandia.tubatuba.net/simulation_details
Quite interesting details.
I wonder if you simulate at the individual level or the group level? It would be cool, at the individual level, to have each one making decisions individually and see some emergent behavior.
Also how corruption emerges in gov etc
Also if no job maybe they could try uber/food delivery crappy jobs like that or start their own business.
Maybe also less money, less likely to have kids? It would be nice to show how poverty helps or hinders population growth. If too poor, they might have no education and would have kids; if an average citizen who can't save money, they will avoid kids. That's why individual-level simulation could find these emerging patterns. But probably too expensive computationally?
If you are referring to the citizens, yes, at individual level. However for traffic I'm using a sampling rate.
> Also if no job maybe they could try uber/food delivery crappy jobs like that or start their own business.
That's an awesome idea, I added it to my backlog :)
> less money less likely to have kids?
This is mega tricky, because it happens very differently across the world. Yes, it can be expensive computationally; that's why the city is so small (for now), but as I start to distribute the simulation across many cores, players with high-core-count CPUs will be able to choose a bigger city size :) I agree that individual-level simulation is what makes it interesting and I plan to keep it like that.
I heard that the SimCity devs have had to fudge that out for gameplay's sake ever since the oldest versions
This weekend I have plans to start playing a lot of Subway Builder (https://www.subwaybuilder.com), which I'm really excited about, and maybe get some books on the subject, in order to get it right.
Parking space simulation is coming soon. I feel I would completely miss the point if I left that out. The idea is to have street parking (with configurable profit for the city), parking lots, and buildings with underground parking, which should conflict, of course, with metro lines.
https://www.inclusivecolors.com/
- You can precisely tweak every shade/tint so you can incorporate your own brand colors. No AI or auto generation!
- It helps you build palettes that have simple to follow color contrast guarantees by design e.g. all grade 600 colors have 4.5:1 WCAG contrast (for body text) against all grade 50 colors, such as red-600 vs gray-50, or green-600 vs gray-50.
- There's export options for plain CSS, Tailwind, Figma, and Adobe.
- It uses HSLuv for the color picker, which makes it easier to explore accessible color combinations because only the lightness slider impacts the WCAG contrast. A lot of design tools still use HSL, where the WCAG contrast goes everywhere when you change any slider which makes finding contrasting colors much harder.
- Check out the included example open source palettes and what their hue, saturation and lightness curves look like to get some hints on designing your own palettes.
It's probably more for advanced users right now but I'm hoping to simplify it and add more handholding later.
Really open to any feedback, feature requests, and discussing challenges people have with creating accessible designs. :)
There's so much more to do with tools like this, and I'm really glad to see it.
- Drag the hue and saturation curves to customise the tints/shades of a color. Look at the UI mockup as you do this to make sure the tints/shades look good together.
- The color pairings used in the UI mockup all initially pass WCAG contrast checks but this can break if you tweak the lightness curve of a color. The mockup will show warning outlines if this happens. Click on a warning and it'll tell you which color pairs need to have their lightness values moved further apart to fix it.
- Once you're happy, use the export menu to use your colors in your CSS or Figma designs. You can use the mockup as a guide for which color pairs are accessible for body text, headings, button outlines and so on.
Does that make more sense? You really need to be on desktop as well because the mobile UI is more of a demo.
COLOR='#000000' # Okabe-Ito: 1 black
COLOR='#e69f00' # Okabe-Ito: 2 orange
COLOR='#56b4e9' # Okabe-Ito: 3 skyblue
COLOR='#009e73' # Okabe-Ito: 4 bluish-green
COLOR='#f0e442' # Okabe-Ito: 5 yellow
COLOR='#0072b2' # Okabe-Ito: 6 blue/darkerblue
COLOR='#d55e00' # Okabe-Ito: 7 vermilion/red
COLOR='#cc79a7' # Okabe-Ito: 8 reddish-purple
https://www.inclusivecolors.com/?style_dictionary=eyJjb2xvci...
I've sorted the colors by luminance/lightness and added a gray swatch for comparison so you can explore which color pairs pass WCAG contrast checks.
I haven't really gotten into colorblind safe colors like this yet where the colors mostly differ by hue and not luminance. Colorblind and non-colorblind people should be able to tell colors apart based on luminance difference i.e. luminance contrast. Hue perception is impacted by the several different kinds of color blindness so it's much trickier to find a set of colors that everyone can tell apart. This relates to the WCAG recommendation you don't rely on hue (contrast) to convey essential information (https://www.w3.org/WAI/WCAG21/Understanding/use-of-color.htm...).
The gray swatch above could be called colorblind safe for example because as long as you pick color pairs with enough luminance contrast between them, colorblind and non-colorblind people should be able to tell them apart. You could even vary the hue and saturation of each shade to make it really colorful, as a long as you don't change the luminance values the WCAG contrast between pairings should still pass.
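For reference, the luminance contrast being checked here is the standard WCAG 2.x ratio, which is easy to compute yourself (a Python sketch of the published formula):

def linearize(c):
    c /= 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(rgb1, rgb2):
    hi, lo = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(contrast((0, 0, 0), (230, 159, 0)))  # black vs Okabe-Ito orange: ~9.3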
It supports multiple LLM providers: OpenAI, Anthropic, xAI, DeepSeek, Gemini, OpenRouter, Z.AI, Moonshot AI, all with automatic failover, prompt caching, and token-efficient context management. Configuration occurs entirely through vtcode.toml, sourcing constants from vtcode-core/src/config/constants.rs and model IDs from docs/models.json to ensure reproducibility and avoid hardcoding. [0], [1], [2]
Recently I've added Agent Client Protocol (ACP) integration. VT Code is now a fully compatible ACP agent and works with any ACP client: Zed (first-class support), Neovim, marimo notebook. [3]
[0] https://github.com/vinhnx/vtcode
[1] https://crates.io/crates/vtcode
[3] https://agentclientprotocol.com/overview/agents
Thank you!
I love that you've made it open source and that it's in Rust, thanks a lot for the work!
I chose Rust since I have some familiarity and experience with it. VT Code is, of course, AI-assisted; I mainly use Codex to help me build it. Thank you again for checking it out, have a great day! : )
I’m curious though, how significant do you think it is for the agent to have semantic access through Tree-sitter?
Also what model have you had the most success with ?
> I’m curious though, how significant do you think it is for the agent to have semantic access through Tree-sitter?
For this, I'm really not sure, but since the start of building VT Code I've had this idea of using tree-sitter to give the agent more (or faster and more precise) semantic understanding of the code, instead of relying on it to figure things out itself. Naively, I think this could help the agent make better language-specific and more accurate decisions about the workspace (context) it is working in. Without tree-sitter, I think the agent could eventually figure things out itself, so I should research this topic more. VT Code includes 6 languages (Go, Python, Rust, TypeScript, Swift...) via rust-binding crates; when you launch the vtcode agent on any workspace, it will show the main languages in the workspace right away.
> Also what model have you had the most success with ?
I'm on a limited budget, so I can only use OpenRouter and utilize its vast model support; that lets me prototype quickly for different use cases. For the VT Code agent, I mainly use x-ai/grok-code-fast-1: in my experience it is best suited for building VT Code itself because of its speed, versatile function calling, and good instruction following. I've also had good success with x-ai/grok-4-fast. I have not tried claude-4.5-sonnet or gpt-5/gpt-5-codex, though. I would really love to run benchmarks to see how VT Code performs on real-world coding tasks; I'm aiming for the Aider polyglot bench, terminal-bench, and swe-bench-lite, which is in my plans in my GitHub issues.
For VT Code itself, I instruct it to strictly follow the system prompt, in which I take various inspiration from Anthropic, OpenAI, and Devin guides/blogs on how to build coding agents. But for a model-agnostic agent, the capability to support multiple providers and multiple models is a challenge, and for this I think I need help. I'm fortunate to have support from the open-source community, which suggested I use zig; I have had good success with it so far for implementing LLM calls and the /model picker.
Overall, in my experience building VT, the most important aspect of an effective coding agent is context engineering, as all the big labs' research shows. A good system prompt is also very important, but context is not everything. https://github.com/vinhnx/vtcode/blob/main/prompts/system.md
// Sorry, English is not my main language, so pardon the typo and grammar. Thank you!
We were featured on our local NPR syndicate which is neat: https://laist.com/news/los-angeles-activities/new-grassroots...
Since this is Hacker News, I'll add that I'm building the website and archiving system using Haskell and htmx, but what is currently live is a temp static HTML site. https://github.com/solomon-b/kpbj.fm
On the off chance you are throwing another event, I would love to help you raise much more than $800 one time (my site is https://withfriends.events/)
This might be a naive question which you've probably been asked plenty of times before, so I'm sorry if I'm being tedious here.
Is it really worth the effort and expense to have a real radio station these days? Wouldn't an online stream be just as effective if it was promoted well locally?
A few years ago a friend who was very much involved in a local community group which I was also somewhat interested in asked me if I wanted to help build a low power FM station. He asked me because I know something about radio since I was into ham radio etc.
I was skeptical that it was worth the effort. The nerdy part of me would have enjoyed doing it but I couldn't help thinking that an online stream would probably reach as many people without the hassle and expense of a transmitter, antenna etc.
I know it's a toss up. Every car has an FM radio. Not everyone is going to have a phone plugged in to Android Auto or Apple Car Play and have a good data plan and have a solid connection.
I also pointed out that the technical effort is probably the small part compared to producing interesting content.
1. Radio is COOL. As a fellow ham I think you would agree with me on this one so I'll leave it at that.
2. Internet streaming gives you a wider but far less localized audience. We will have an internet stream, but being radio-first shifts the focus to local community and local content.
3. Internet streaming and radio have related but not entirely overlapping histories and contexts which impacts how people produce and consume their content. I love the traditional formats of radio and they are often completely missing in online radio which IMO models itself more often on mixtape and club DJ culture.
4. AI slop is ruining the world. I have this belief that as AI slop further conquers the internet we are going to get to a place where nobody trusts internet content. People will seek out novelty and authenticity (sort of how LLMs do lol) and I think there will be a return to local content and community.
5. Commercial radio sucks. The LPFM system is a wonderful opportunity to create a strong, community driven alternative to corporate media.
It's an all-in-one toolkit designed to automate the boring stuff so you can focus on flying. Core features include: automatic flight tracking that turns into a digital logbook entry, a full suite of E6B/conversion calculators, customizable checklists, and live weather decoding.
It’s definitely not a ForeFlight killer, but it's a passion project I'm hoping can be useful for other student and private pilots.
App Store: https://apps.apple.com/app/pilot-kit/id6749793975 Google Play: https://play.google.com/store/apps/details?id=club.air.pilot...
Any feedback is welcome!
That’s built on a dataset and paper I wrote called CommonForms, where I scraped CommonCrawl for hundreds of thousands of fillable form pages and used that as a training set:
https://arxiv.org/abs/2509.16506
Next step is training and releasing some DETRs, which I think will drive quality even higher. But the ultimate end goal is working on automatic form accessibility.
Other than that, I've been doing a lot of fixing of tech debt in my home network from the last six years. I've admittedly kind of half-assed a lot of the work with my home router and my server and my NAS and I want these things to be done correctly. (In fairness to me, I didn't know what I was doing back when I started, and I'd like to think I know a fair bit better now).
For example, when I first built my server, I didn't know about ZFS datasets, so everything was on the main /tank mount. This works but there are advantages to having different settings for different parts of the RAID and as such I've been dividing stuff into datasets (which has the added advantage of "defragging" because this RAID has grown by several orders of magnitude and as a result some of the initial files were fragmented).
Just kidding. I just started last Saturday, so I can’t speak much to it. Thus far it has been primarily assigned reading and practice courses. There is an assignment due in November that I need to start which I haven’t yet.
I am taking analytical number theory and fractal geometry.
It’s not really self paced; you can go a little ahead but you can’t speed through it like WGU.
The rough idea is an easy-to-use voice mode to record data, then analyze the unstructured data with AI later on.
I want to track all relevant life information, so what I'm eating, meds I'm taking, headache/nausea levels, etc.
Adding records is as easy as pressing record on my apple watch and speaking some kind of information. Uses Deepgram for voice transcription since it's the best transcription API I've found.
Will then send all information through to a LLM for analysis. It has a "chat with your data" page to ask questions and try and draw conclusions.
Main webapp is done, now working on packaging it into an iOS app so I can pull biometrics from Healthkit. Will then look into releasing it, either on github or possibly in the app store. It's admittedly mostly vibe coded, so not sure if it'll be something releasable, but we'll see...
Let me know if this would interest anyone!
I can suggest the research papers by Markus Dahlem for some in depth modern takes on migraine.
E.g. meditation, yoga, ...
Blood sugar. Turned out I was having hypos. I’ve found now that just a spoonful of honey when I feel them coming on normally reduces my migraines to headaches, which I can manage myself with paracetamol
Sorry - not directly related to your post but still perhaps useful
Try carnivore. No carbs, all animal products. It's reversed so many of my health problems, from pre-diabetes, skin fungus flareups, and mental fog. Over a million people are now carnivore, and the evidence in support is growing quickly.
Also, a plug for Oliver Sacks's Migraine which taught me a lot about migraine with aura.
Note that even the anticipation of meeting people can be a mental load.
Each issue includes:
- Featured talks of the week: the must-watch talks of the week, each with a short human-written video summary.
- New talks: the complete list of all the talks that have been uploaded in the past week, grouped by conference and ordered by view count.
From time to time, I build talk compilations, for example: - https://www.techtalksweekly.io/p/50-most-watched-software-en... - https://www.techtalksweekly.io/p/100-most-watched-software-e..., which made it to the HN front page.
It's built on top of Kubernetes, based on learnings from my previous experiences scaling infrastructure.
If you look at the markup PaaS (Heroku, Fly, Render) applies to IaaS (AWS, Hetzner), it's on the order of 5-10x. But not having that, and trying to stitch together random AWS services is a huge PITA for a medium sized engineering team (we've tried).
On top of all that, there's a whole host of benefits to being on Kubernetes, namely that you can install any Helm package with one click, which Canine also manages.
A good example is Sentry -- even though it has an open source offering, almost everyone pays for the cloud version because it's too scary to self-host. With Canine, it's one click, and you get sentry.your-domain.com to use for whatever you need.
Recently got a sponsorship from the Portainer team to allow me to dedicate way more time to this project, so hugely grateful to them for that.
I'd like to think at this point (about 2 years into development) we've gotten to a place where the end user doesn't even know they are using Kubernetes.
It makes tricky functions like torch.gather and torch.scatter more intuitive by showing element-level relationships between inputs and outputs.
For any function, you can click elements in the result to see where they came from, or elements in the inputs to see exactly how they contribute to the result. I found that visually tracing tensor operations clarifies indexing, slicing, and broadcasting in ways that reading the docs can't.
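As a taste of the element-level relationships the site visualizes, here is torch.gather's rule in plain PyTorch (my own example, not taken from the site):

import torch

src = torch.tensor([[1, 2],
                    [3, 4]])
idx = torch.tensor([[0, 0],
                    [1, 0]])
out = torch.gather(src, 1, idx)  # along dim=1: out[i][j] = src[i][idx[i][j]]
print(out)                       # tensor([[1, 1], [4, 3]])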
You can also jump straight to WhyTorch from the PyTorch docs pages by modifying the base URL directly.
I launched a week or two back and now have the top post of all time on r/pytorch, which has been pretty fun.
torch.matmul was one of the first functions I implemented on WhyTorch and it uses and highlights rows and columns as you would expect.
I’d love to hear any feedback or outcomes from your training session, please feel free to reach out - email in profile.
I'm working on a mini-project which monitors official resources on the web and sends email notifications on time. Currently covering around 15000 inhabitants.
The goal is to serve the laws in a format that is easy to cite, monitor, or machine-read. It should also have predictable URLs that can be inferred from the law’s name. It will also have side by side AI translations (marked as such).
I cite a lot of laws in my content and I want to automatically flag content for review when a specific paragraph of the law changes. I also want to automatically update my tax calculator when the values change.
Basically, a refresh of gesetze-im-internet.de and buzer.de.
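The change-flagging piece can be as simple as hashing each paragraph and diffing on a schedule; a minimal sketch (the URL and paragraph id are placeholders, not the real site's structure):

import hashlib
import urllib.request

known = {}  # paragraph id -> last seen hash (a database in practice)

def check(paragraph_id, url):
    text = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(text).hexdigest()
    if known.get(paragraph_id) not in (None, digest):
        print(f"{paragraph_id} changed; flag dependent content for review")
    known[paragraph_id] = digest

check("estg-32a", "https://example.org/estg/para-32a")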
Dunno if other governments are this Byzantine in practice (our system seems to be like... manual integration of diff patches) but it's pretty interesting and I really appreciate the work that goes into these types of things.
Where I'm from, citizens _need_ more awareness of their rights today and in the future.
It will be on either of these accounts:
I started this out of frustration that there is no good tool I could use to share photos from my travel and of my kids with friends and family. I wanted to have a beautiful web gallery that works on all devices, where I can add rich descriptions and that I could share with a simple link.
Turned out more people wanted this (got 200+ GitHub stars for the V1) so I recently released the V2 and I'm working on it with another dev. Down the road we plan a SaaS offer for people that don't want to fiddle with the CLI and self-host the gallery.
I also tried the vertical masonry layout, which looks good, but makes no sense if your photos have a chronological order...
The magic happens here: https://github.com/SimplePhotoGallery/core/blob/a3564e30bcb6...
I stumbled across it looking for CSS flex masonry examples.
- No sign-up, works entirely in-browser
- Live PDF preview + instant download
- EU VAT support
- Shareable invoice links
- Multi-language (10+) & multi-currency
- Multiple templates (incl. Stripe-style)
- Mobile-friendly
GitHub: https://github.com/VladSez/easy-invoice-pdf
Would love feedback, contributions, or ideas for other templates/features.
https://github.com/VladSez/easy-invoice-pdf/blob/main/LICENS...
I have been working on it for the last two years as a side project, but starting in March it will be my full-time job! Kind of excited and scared at the same time.
Could you please provide a Docker image?
Many thanks!
How do you switch from open source (1) to full time paid job (2) ? I'm curious cause I'm still stuck in (1)
Thanks for your feedback
[1]: https://nlnet.nl/project/IronCalc/ [2]: https://github.com/suitenumerique/calc
The idea is for the game to make logical sense, but make the player sound completely unhinged from reality "I need to put the toaster on top of the oven to make the lamp spin around, that way I can move the lamp across the room near the couch to unlock the next level"
Here is a tiny preview of the basic mechanics: https://youtu.be/cUU1HnT95RE
The main feature: you can run multiple language servers simultaneously for the same buffer.
One of the main reasons people stick with lsp-mode over Eglot has been the lack of multi-server support. Eglot is otherwise the most "emacsy" LSP client, so I'm working on filling that gap and I hope it could be merged into Emacs one day.
This is still WIP but I've been using it for a while for Python (basedpyright or pyrefly + ruff for linting) and TypeScript (ts-ls + eslint + tailwind language server).
I've been working on the idea for about a year now. I have put up the funds and set up the corporation. Been busy designing the menu, scouting an ideal location and finding the right front-end staff.
Redesigning investment holdings for wider screens and leaning on hotwired turbo frames. Thankful for once-campfire as a reference for how to structure the backend. The lazy loading attribute works great with css media queries to display more on larger viewports.
Enjoying learning modern css in general. App uses tailwind, but did experiment with just css on the homepage. Letting the design emerge organically from using it daily, prototype with tailwind, then slim it back down with plain css.
So I'm trying to define a multiplication operation using primitive roots.
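The basic ingredient is the discrete-log property of primitive roots, which turns multiplication into addition of exponents (a toy sketch mod a small prime, just to illustrate the ingredient):

p, g = 11, 2  # 2 is a primitive root mod 11
log = {pow(g, k, p): k for k in range(p - 1)}  # discrete-log table for 1..10

def mul(x, y):
    # multiply by adding exponents mod p-1
    return pow(g, (log[x] + log[y]) % (p - 1), p)

assert mul(3, 4) == (3 * 4) % p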
[0] https://leetarxiv.substack.com/p/if-youre-smart-why-are-you-...
[1] (The other time the US gov put a backdoor in an elliptic curve) https://leetarxiv.substack.com/p/dual-ec-backdoor-coding-gui...
It uses MedGemma 4B for analyzing medical images and generating diagnostic insights and reports. Of course it must be used with caution; it's not for real diagnostics, but it can be something to get another view from, maybe.
Currently, it supports chat and report generation, but I'm stuck on what other features to add beyond these. I'm also experimenting with integrating the 27B model; even with 4-bit quantization, it looks better than the 4B.
The idea is to eventually add more categories like “restaurants,” “theaters,” “roads,” etc., so you can play based on local themes.
I’d love to hear your thoughts - any feedback on what you’d like to see, what feels off, or any issues you run into would be super helpful.
All but one of the prompts were in a 3-block radius IN the city (again, about 20 minutes from my town's town hall).
So for the one prompt I didn't know, I guessed the same 3-block radius as the others, and it was about 2 miles away. Still in the city, not the town I typed in.
It seems like smaller towns will be gobbled up by famous cities' elements, especially here in New England where the 'famous' local things are so few.
edit: also, changing the 'radius' resets the city to where the website THINKS I am instead of where I typed in.
Bug: I tried in my area in the Canary Islands and all the places were off, sometimes in the middle of nowhere or even in the sea.
Also, in small villages, we don't necessarily have a town hall, a library, etc (within selected radius), but the game asked to pin these.
It currently supports complex heatmaps based on travel time (e.g. close to work + close to friends + far from police precincts), and has a browser extension to display your heatmap over popular listing sites like Zillow.
I'm thinking of making it into an API to allow websites to integrate with it directly.
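Under the hood, the per-cell scoring can be as simple as a weighted sum; a toy sketch (the weights and the stubbed travel-time function are illustrative, not the app's actual model):

def heat(cell, travel_time):
    # higher is better; travel_time(cell, place) in minutes
    score = 0.0
    score -= 1.0 * travel_time(cell, "work")     # close to work
    score -= 0.5 * travel_time(cell, "friends")  # close to friends
    score += 0.3 * travel_time(cell, "police")   # far from precincts
    return score

times = {"work": 20, "friends": 35, "police": 10}
print(heat("cell-42", lambda cell, place: times[place]))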
It's largely finished and functional, and I'm now focused on polish and adding additional builtin functions to expand its capabilities. I've been integrating different geometry libraries and kernels as well as writing some of my own.
I've been stress-testing it by building out different scenes from movies or little pieces of buildings on Google Maps street view - finding the sharp edges and missing pieces in the tool.
My hope is for Geotoy to be a relatively easy-to-learn tool and I've invested significantly in good docs, tutorials, and other resources. Now my goal is to ensure it's something worth using for other people.
It'll work in sessions where first everyone can suggest games, then in the second phase veto out suggestions, then vote and it'll display the games with the highest vote. You can also manage/import a list of your games and it'll show who owns what. It's geared towards video games, but will work for board games too. Hope to release it for everyone in the next weeks.
It can process a set of 3-hour audio files in ~20 mins.
I recorded a demo video of how it works here: https://www.youtube.com/watch?v=v0KZGyJARts&t=300s
[1] https://github.com/naveedn/audio-transcriber
I alluded to building this tool on a previous HN thread: https://news.ycombinator.com/item?id=45338694
Thanks for building this. I am trying to set it up but facing this issue:
> `torch` (v2.3.1) only has wheels for the following platforms: `manylinux1_x86_64`, `manylinux2014_aarch64`, `macosx_11_0_arm64`, `win_amd64`
The insight: your architecture diagram shouldn't be a stale PNG in Confluence. It should be your war room during incidents.
Going to be available as both web app and native desktop.
After acquiring a flight school, I quickly realized how challenging the day-to-day operations were. To solve the problems of aircraft fleet management, scheduling, and student course progress tracking, I developed a comprehensive platform that handles all aspects of running a flight school. Existing software is often outdated and expensive, offering poor value for its high cost. FlightWise was built off the real world experiences of my own school, where it has delivered immediate and invaluable benefits to our entire team, from students to administrative staff. We've just recently started to offer this platform publicly to other flight schools.
Taking a break from tech to work on a luxury fashion brand with my mum. She hand-paints all the designs. The first collection is a set of silk scarves, and we're moving into skirts and jackets soon.
Been a wonderful journey to connect with my mum in this way. And also to make something physical that I can actually touch. Tech seems so…ephemeral at times
Some earnest and unsolicited feedback on the website: the scroll-based transition is not really working well, looks very jumpy in Safari/MacOS, maybe interpolating between states will help smooth it out. Design-wise, the blur effect is quite jarring, and the product list screams Shopify store and not luxury brand. You already have pretty good photography, I'd feature the portraits heavily instead of the flat product shot. Invest in great typography.
I was incidentally browsing for a new wallet recently and think this might be good inspiration: https://secrid.com/en-nl/collections/carry-with-confidence/. Wish you and your mother success!
I'm calling it a "Micro Functions as a Service" platform.
What it really is, is hosted Lua scripts that run in response to incoming HTTP requests to static URLs.
It's basically my version of the old https://webscript.io/ (that site is mostly the same as it was as long as you ignore the added SEO spam on the homepage). I used to subscribe to webscript and I'd been constantly missing it since it went away years ago, so I made my own.
I mostly just made this for myself, but since I'd put so much effort into it, I figure I'm going to try to put it out there and see if anyone wants to pay me to use it. Turns out there's a _lot_ of work that goes into abuse prevention when you're running code from literally anyone on the internet, so it's not ready to actually take signups yet. But, there is a demo on the homepage.
I'm not sure I would have actually started building this if I knew that was an option. Hopefully their existence is telling me "there's a market, go for it" and not "there's already a better alternative, don't do it", heh. Though their pricing tiers really tell me I need to optimize my sandboxing. I don't think I can match those request limits at those prices just from the CPU cost of my per-request sandboxing overhead.
Not sure what the market is for something like this but it's something I've been thinking a lot about since stepping down as CEO of my previous company.
My goal is two-fold:
1. Help teams make better, faster decisions with all context populating a source-of-truth.
2. Help leaders stay eyes-on, and circumstantially hands-on, without slowing everything down. What I'd hope to be an effective version of "Founder Mode".
If anybody wants to play around with it, here's a link to my staging environment:
https://staging.orgtools.com/magic-share-link/5a917388cf19ed...
I've added it to SaaSHub saashub.com/orgtools. If you have an @orgtools.com email you can verify and improve the profile. Cheers!
I originally had "less meetings" before an LLM corrected me into using "fewer meetings". Then when talking about Orgtools to a couple people I heard them say "less meetings" and switched back thinking that sounds slightly more natural (but incorrect).
> Less has been used to modify plural nouns since the days of King Alfred
https://www.merriam-webster.com/dictionary/less
More reading on Wikipedia: https://en.wikipedia.org/wiki/Fewer_versus_less
I use AI coupled with semantic caching (for cost control), NLP, and content moderation to speed up community management for users. It has a manual mode, a semi-agentic mode, and a fully agentic mode. It not only generates responses, it contextualises them and understands from your comments who the key contributors are and how you're being perceived.
It can help you identify your promoters (potential customers) and also guide you toward content that works. It's essentially an evolution of current community management tools and AI tools, which mostly focus on content generation. It speeds up community management massively with the advanced filtration and ML in the backend.
What do you guys think?
It's a real life treasure hunt in the Blue Ridge Mountains with a current total prize of $31,200+ in gold coins and a growing side pot.
I modeled it off of last year's Project Skydrop (https://projectskydrop.com) which was in the Boston area.
* Shrinking search area (today, Day 5, it will be 160 miles, on Day 21 it'll be just 1 foot wide)
* 24/7 webcam trained on the jar of gold coins sitting on the forest floor just off a public hiking trail
* Premium upgrades ($10 from each upgrade goes towards the side pot) for aerial photos above the treasure and access to a private online community (and you get your daily clues earlier)
* $2 from each upgrade goes towards the goal of raising $20k for continued Hurricane Helene relief
So far the side pot is $6k and climbing.
It's been such a fun project to work on, but also a lot of work. Tons of moving parts and checking twice and three times to make sure you've scrubbed all the EXIF data, etc.
did you do any math around predicting if 'donating' this gold to a treasure hunter would yield an even greater amount to hurricane relief?
So, I built it.
Using ChatGPT's voice agents to generate Github issues tagging @claude to trigger Claude Code's Github Action, I created https://voicescri.pt that allows me to have discussions with the voice agent, having it create issues, pull requests, and logical diffs of the code generated all via voice, hands free, with my phone in my pocket.
Are you reviewing code by voice, like a blind programmer? Have you tried Emacspeak? I know that's not normally hands-free.
At least 50% of the available storage is tools to fix mechanical, plumbing, and electrical issues that appear, because it is inevitable given the mileage we are putting on. Definitely living a Zen and the Art of Motorcycle Maintenance lifestyle at the moment.
Code review absolutely happens via voice. The voice agent has access to the pull requests and code diffs and is able to reason about the changes and explain them to me.
I'm also playing around with a tool that opens up a visual preview of the Vercel/Netlify preview deploys so I can explore the (web) changes handsfree.
Emacspeak is new to me. I'll look into it. Thanks!
Emacspeak has enabled at least some blind programmers to work efficiently, so I think its auditory code formatting abilities are usably good, which probably isn't the case for all screen readers.
It pulls down up to 400 emails for each custom label and creates a custom model just for you, that will label new incoming email.
For emails that are likely, but not certain, to be a particular label, I use a 'Proposed/{label}' approach, which lets you just archive them in Gmail; it will detect that they've been archived with the proposed label and move them to the correct label. (Essentially using the archive action as an acceptance criterion.) Similarly, I use re-labeling by the user as a negative signal, and include that data as a counter-example.
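For the curious, here's a minimal Python sketch of that acceptance loop. The Message shape and the accept/reject callbacks are hypothetical stand-ins for the real Gmail-backed pipeline:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Message:
        label: str                        # e.g. "Proposed/Receipts"
        is_archived: bool                 # user archived it in Gmail
        user_relabeled_to: Optional[str]  # label the user moved it to, if any

    def reconcile(messages, accept, reject):
        for msg in messages:
            if not msg.label.startswith("Proposed/"):
                continue
            proposed = msg.label.removeprefix("Proposed/")
            if msg.is_archived:
                accept(msg, proposed)   # archiving == acceptance
            elif msg.user_relabeled_to:
                reject(msg, proposed)   # re-labeling == counter-example

    reconcile(
        [Message("Proposed/Receipts", True, None),
         Message("Proposed/Receipts", False, "Travel")],
        accept=lambda m, l: print("label as", l),
        reject=lambda m, l: print("counter-example against", l),
    )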
It's working well for my own accounts, and the back-end is pretty legendary, but Google requires a hefty security-audit cost to turn it into a real product.
It always frustrated me that Google won't use their ML systems to label emails for me based on what I've done before. So I scratched that itch.
I'm using very straightforward BERT models right now, but I'm exploring using something a little more intelligent. I'm also exploring a multi-stage process, because a lot of emails can be categorized using much simpler techniques.
It's a great Machine Learning project, with a back-end that really runs spectacularly on Temporal and Kubernetes, and it's useful to me, so...wins all around.
I do wish I could make it a product, though.
Link: https://ohyahapp.com
Interesting challenge was designing for minimal distractions while keeping setup simple for parents. Timer-locked navigation so kids can see what's next but can't start other tasks or switch profiles. Also refactored from schedule-centric (nightmare to maintain) to task-definitions as first-class citizens, which made creating schedules way easier
React Native/Expo + Firebase. On the App Store after months of dogfooding with the family
What a neat tool!
- A front-end library that generates 10kb single-html-file artifacts using a Reagent-like API and a ClojureScript-like language. https://github.com/chr15m/eucalypt
- Beat Maker, an online drum machine. I'm adding sample uploads now with a content accessible storage API on the server. https://dopeloop.ai/beat-maker
- Tinkering with Nostr as a decentralized backend for simple web apps.
http://github.com/patched-network/vue-skuilder, docs-in-progress at https://patched.network/skuilder
I am using this stack now to build an early literacy app targeting kids aged 3-5ish at https://letterspractice.com (also pre-release state, although the email waitlist works I think!). LLM assisted edtech has a lot of promise, but I'm pretty confident I can get the unit cost for teaching someone to read down to 5 USD or less.
It's like inventing the refrigerator and all the brochure talk about is the internal engineering of the machine, rather than how keeping food cold is useful from the economic and culinary perspectives.
My focus on that front is the LettersPractice app. I taught my own kids (6, 4) to read using early versions of the same software, and I'm pretty confident about the efficacy of the approach.
As far as the broader project moving toward being a consumer-facing application, there are a few options.
The existing platform-ui is a skeleton / concept sketch of one category: a web platform that allows users to create and subscribe to different courses, with study sessions aggregating content from all subscribed courses. A reddit for knowing stuff and having skills.
Another broad category is a NoCode ITSaaS (interactive tutoring system as a service?) platform. E.g., a specialized bolt.new for EdTech that uses agentic workflows to create courses that cover a given domain or specific input documents (e.g., textbooks, curriculum documents).
Very interested in this sort of stuff.
Should be working now.
Really appreciate the interest.
It is a tool that lets you create whiteboard explainers.
You can prompt it with an idea or upload a document and it will create a video with illustrations and voiceover. All the design and animations are done using AI APIs; you don't need any design skills.
Here is a video explainer of the popular "Attention is all you need" paper.
https://www.youtube.com/watch?v=7x_jIK3kqfA
Would love to hear some feedback
The animations / drawings themselves are solid too. I think there's more to play with wrt the dimensions and space of the background. It would be nice to see it zoom in and out for example.
how does it work with long papers? will it ever work with small books?
will try it out tomorrow again
yes it should work.
> i can’t upload the document
Could you please drop an email to rahul at magnetron dot ai with the document. I will set things up for you
I've created two open-source solutions, one which uses a VM (https://github.com/webcoyote/clodpod) and another which creates a limited-user account with access to a shared directory (https://github.com/webcoyote/sandvault).
Along the way I rolled my own git-multi-hook solution (https://github.com/webcoyote/git-multi-hook) to use git hooks for shellcheck-ing, ensuring files end with blank lines, and avoiding committing things that shouldn't be in source control.
It's an API that allows zero-knowledge proofs to be generated in a streaming fashion, meaning ZKPs that use way less RAM than normal.
The goal is to let people create ZKPs of any size on any device. ZKPs are very cool but have struggled to gain adoption due to the memory requirements. You usually need to pay for specialized hardware or massive server costs. Hoping to help fix the problem for devs
I’d love to play with simple ZKP algos.
In the AI macro food logging world, there's really only Cal AI which estimates macros based on an image. I use cronometer personally, and it's super annoying to have to type everything in manually, so it makes sense why folks reach for something like Cal AI. However, the problem with something like Cal AI is accuracy. It's at best a guess based on the image. Macros for humans tries to be more of a traditional weigh your food, log it, etc kind of app, while updating the main interface for how users input that info into something more friendly.
I set myself a hard deadline to present a live demo at a local showcase/pitch event thing at the end of the month. I bet the procrastination will kick in hard enough to get the backend hosted with a proper database and a bit more UI polish running on my phone. :-)
Here's a really early demo video I recorded a few weeks ago. I had just spoken the recipe on the left and when I stop recording you can see my backend streams the objects out as they're parsed from the LLM https://www.youtube.com/watch?v=K4wElkvJR7I
args:
username str # Required string
password str? # Optional string
token str? # Optional auth token
age int # Required integer
status str # Required string
username requires password // If username is provided, password must also be provided
token excludes password // Token and password cannot be used together
age range [18, 99] // Inclusive range from 18 to 99
status enum ["active", "inactive", "pending"]
Rad does all the arg parsing for you (unlike Bash), including validation for those constraints you wrote, and you can get on with writing the rest of your script in a nice, friendly syntax! Very keen for feedback, so if any of that sounds interesting, feel free to give it a go!
Recent focus has been on geolocation accuracy, and in particular being able to share more data about why we say a resource is in a certain place.
Lots of folks seem to be interested in this data, and there's very little out there. Most other industry players don't talk about their methodology, and those that do aren't overly honest about how X or Y strategy actually leads to a given prediction, the realistic scale of inaccuracies of a given strategy, and so on. So this is an area I'm very interested in at the moment and one I'm confident we can do better in. And it's overall a fascinating data challenge!
I'm still rebuilding OnlineOrNot's frontend to be powered by the public REST API. Uptime checks are now fully powered by a public API (still have heartbeat checks, maintenance windows, and status pages to go).
Doing this both as a means of dogfooding, and adding features to the REST API that I easily dumped into the private GraphQL API without thinking too hard. That, and after I finish the first milestone (uptime checks + heartbeat/cron job monitors), I'll be able to start building a proper terraform provider, and audit logs.
Basically at the start of the year I realised GraphQL has taken me as far as it can, and I should've gone with REST to start with.
Most recipes are a failure for beginners on the first try. I aim to make recipes bulletproof so anyone can pick up any recipe and it will just work.
The goal is to make the best recipe app ever. On a technical level, recipes are built as graphs and assembled on demand. This makes multilanguage support easy, any recipe can use any unit imaginable, blind people could have custom recipe settings for their needs, search becomes OP, and there is also a wikipedia-like database with information that links to all recipes. Because of the graphs, nutritional information, environmental impact, cost, etc. can be calculated accurately by following linked graphs. Most recipe apps are targeted at specific geographical regions and languages; this graph system removes a lot of barriers between countries and will also be a blessing to expats. Imagine an American in Europe who wishes to use imperial units and English recipes, but with ingredients native to their new homeland. No problem: just follow a different set of nodes and the recipe is created that way for them.
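To make the graph idea concrete, here's a tiny Python sketch of resolving one ingredient node per-user; the node IDs and schema are invented for illustration, not the app's actual data model:

    # Resolve one ingredient node for a user's language/units.
    GRAPH = {
        "ingredient:flour": {"name": {"en": "flour", "nl": "bloem"},
                             "unit": "unit:gram"},
        "unit:gram": {"to_oz": 0.035274},
    }

    def render(node_id, grams, lang="en", units="metric"):
        node = GRAPH[node_id]
        name = node["name"][lang]
        if units == "imperial":
            return f"{grams * GRAPH[node['unit']]['to_oz']:.1f} oz {name}"
        return f"{grams} g {name}"

    # The American-expat case: English text, imperial units, local ingredient.
    print(render("ingredient:flour", 500, lang="en", units="imperial"))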
The website is slightly outdated but gives a good idea of what is coming. Current goal is to do beta launch in 2026.
From the marketing side...
I'd make a selection on the website on first visit - I'm a chef / creator - I like to cook
Your cta (call to action) is... Not very effective
Instagram only has 7 followers and no posts. ...
I like the dedication but I'd definitely recommend to improve your marketing / promotion skills (if you build it they will come is a myth unfortunately...), if you wanna have a call about it feel free to hit me up, tijlatduckdotcom. I'm also in Europe so easy for timing.
The goal is to make it straightforward to design and deploy small, composable audio graphs that fit on MCUs and similar hardware. The project is in its infancy, so there’s plenty of room for experimentation and contributions.
Are you thinking about supporting deployment on FPGAs like the iCE40 line?
It's been a great project to understand how design depends on a consistent narrative and purpose. At first I put together elements I thought looked good but nothing seemed to "work" and it's only when I took a step back and considered what the purpose and philosophy of the design was that it started to feel cohesive and intentional.
I'll never be a designer but I often do side projects outside my wheelhouse so I can build empathy for my teammates and better speak their language.
It’s going to feature a synchronous IPC model where the inter-task ‘call graph’ is known at compile time, with function call semantics to pass data between tasks: call(), receive(), reply().
A build tool that reads TOML will generate the kernel calls so that tasks can be totally isolated; all calls go through a supervisor trap, so we have true memory isolation.
Preemptions are possible but control is yielded only at IPC boundary so it’s not hard realtime.
That makes behavior super robust and auditable at compile time. Total isolation means tasks can crash catastrophically without affecting the rest of the system. Big downsides are a huge increase in flash usage, a constrained programming model, a complex build system, and task-switching overhead. Just a very different model than what I’m used to at $dayjob.
I want to basically find out, hey what happens when we go full safety!? What’s hard about it? What tradeoffs do we need to make? And also kinda like what’s a different model for multitasking. Written in Rust of course.
It’s offline-first and totally local (no cloud, no tracking). You just drop in your accomplishments, metrics, or files as you go, and later it helps you summarize them (ML model that runs locally) for performance reviews, promotions, or interviews.
Basically built it because I got tired of trying to reconstruct a year’s worth of work from Slack and Jira the night before review time.
It’s a one-time $0.99 download for macOS, Windows, and Linux.
Curious if others have tried building similar “career memory” tools or have thoughts on what’s missing.
Would love feedback from people who’ve struggled to keep track of their work accomplishments or prep for performance reviews!
1. filter via tags, though one has to manually tag the entries.
2. filter by company -- maybe way too broad based
3. specific word search.
4. filter by date range -- probably this + tags should help one find those entries which happened last quarter etc
Open to considering other ideas! So glad to see someone being excited about this!!
- 3D visualization of sea surface temps over time, very much a work in progress: https://globe-viz.oberbrunner.com
- Also a Deep Time log-scaled timeline of the history of the universe at https://deep-timeline.org
We’re working directly with partner housing unions and charities in Britain and Ireland to build the first central database of rogue landlords and estate agents. Users can search an address and see if it’s marked as rogue/dangerous by the local union, as well as whether you can expect to see your deposit returned, maintenance, communication - etc.
After renting for close to a decade, it’s the same old problems with no accountability. We wanted to change this, and empower tenants to share their experiences freely and easily with one another.
We’re launching in November, and I’m very excited to announce our partner organisations! We know this relies on a network effect to work, and we’re hoping to run it as a social venture. I welcome any feedback.
https://apps.apple.com/us/app/teletable-football-teletext/id...
So I started https://github.com/vicentereig/dspy.rb: a composable, type-safe version built for Rubyists who want to design and optimize prompts, and reuse LLM pipelines without leaving their language of choice. Working with DSPy::Signatures reminds me a bit of designing a db schema with an ORM.
It’s still early, but it already lets you define structured modules, instrument them in Langfuse, wire them up like functional components, and experiment with signature optimization. All in plain Ruby.
I enjoy exploring AI-assisted process development. Things I've learned are:
* Building n8n pipelines for generating placeholder art. GPT-5 excels at writing code nodes.
* LLM for LaTeX generation. (Note: GPT-5 generally fails at even moderately complex LaTeX and often adds complexity while endlessly orbiting the problem without solving it. On the other hand, its failed solutions eventually give me enough data points to develop the correct solution on my own, probably faster than learning enough LaTeX to develop them from scratch. Sadly, I can fix broken LaTeX much more easily than I can write correct LaTeX.)
* Using LaTeX to read card data out of CSVs is difficult to get right (esp with embedded LaTeX) but honestly pretty good for rapid prototyping.
* Searching games for mechanics to solve specific interaction problems is an ok-ish problem for GPT-5. I wish I had a vector DB of indexed PDF manuals for the top 5k bgg games, but that's too big of a problem to solve.
It's incredibly cheap and easy to generate reasonable approximations of thematic text sufficient for a prototype.
Overall, I went from idea to first round of playtesting in about 2-3 weeks. Previous similar attempts had been 2-3 months. I feel quite a bit less attached to darlings, but the downside is a bit of shame attached to having playtesters work around errors originating from AI usage rather than errors generated by my own oversights
Create REST APIs for PostgreSQL databases in minutes.
- one-man project (me)
- been doing it well over a year now
- no sponsorship, no investors, no backers, no nothing, just my passion
- I haven't even advertised much; this may be the first or second time I'm sharing a link
- on weekdays I'm building serious stuff with it
- on weekends I'm preparing a new major version with lessons learned from doing a real project with it
Not going to stop. But I might be seeking sponsors in the future; not sure how that will turn out. If not, that's ok, I'm cool being the only user.
Some are small tech jokes, while others were born from curiosity to see how LLMs would behave in specific scenarios and interactions.
I also tried to use this collection of experiments as a way to land a new job, but I'm starting to realize it might not be serious enough :)
Happy to hear what you think!
Recently I started executing the upstream spec tests against it, as a means to increase spec conformance. It's non-streaming, which is a non-starter for many use cases, but I'm hoping to provide a streaming API later down the road. Also, the errors interface is still very much WIP.
All that said, it's getting close to a fully-conformant one and it's been a really fun project.
P.S. I'm new to the language so any feedback is more than welcome.
Sign up for my waitlist (or DM me if you want to know more) here: https://www.getsnapneat.com
I'm curious what sets your app apart?
One thing that I miss in MacroFactor is that it should have some memory of my previous choice.
Example: If I take a picture of a glass of milk, it always assumes it to be whole milk (3.5% fat). Then I change it to a low fat milk (0.5% fat). But no matter how many times I do that, it keeps assuming that the milk in the photo is whole milk.
Recently I've managed to port the game onto a real-world cyberdeck, the uConsole. [1]
[0] https://store.steampowered.com/app/3627290/Botnet_of_Ares/
We have a fun group working on it on Discord (find the discord invite in the How To)
Break down your software requirements (Userdoc guides you through the process), refine/confirm, setup your technical specs, coding/business guidelines & guardrails, and then create development plans (specs) which can be easily consumed by coding agents via MCP, or by platforms like Lovable / v0 using Markdown. Working on Cursor background agent integration atm.
It’s an iOS app to help track events and stats about my day as simple dots. How many cups of coffee? Did I take my supplements? How did I sleep? Did I have a migraine? Think of it like a digital bullet journal.
Then visualizing all those dots together helps me see patterns and correlations. It’s helped me cut down my occurrence of migraines significantly. I’m still just in the public beta phase but looking forward to a full release fairly soon.
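Under the hood, the pattern-spotting boils down to correlating one dot series against another. A toy sketch in pandas with made-up data (not the app's actual code):

    import pandas as pd

    # Each column is one kind of "dot", one row per day (made-up data).
    days = pd.DataFrame({
        "coffee":   [3, 1, 4, 0, 2, 5, 1],
        "migraine": [1, 0, 1, 0, 0, 1, 0],
    })
    # A strongly positive number suggests the two dots move together.
    print(days.corr().loc["coffee", "migraine"])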
Would love to hear more feedback on how to improve the app!
For work, https://heyoncall.com/ as the best tool for on-call alerting, website monitoring, cron job monitoring, especially for small teams and solo founders.
I guess they both fall under the category of "how do you build reliable systems out of unreliable distributed components" :)
Working on faceted search for logs and CLI client now and trying to share my progress on X.
The current challenge is the display. I’ve struggled to learn about this part more than any other. After studying DVI and LVDS, and after trying to figure out what MIPI/DSI is all about, I think parallel RGB is the path forward, so I’ve just designed a test PCB for that, and ordered it from JLCPCB’s PCBA service.
[0] https://github.com/stryan/materia and/or https://primamateria.systems/
https://github.com/tomaytotomato/location4j
I think I am going to re-write the logic to calculate a score on all matches it makes from a given piece of text.
e.g.
"us ca" ---> is this "USA California" or "USA and Canada (CA ISO2 code)"?
"san jose usa" ---> is this "San Jose California, USA" or another San Jose in America
There are some Amish people who rebuild Dewalt, Milwaukee etc battery packs. I'd like a repairable/sustainable platform where I can actually check the health of the battery packs and replace worn out cells as needed.
To give you an idea of the market, original batteries are about $149, and their knockoffs are around $100.
Battery-powered hand tools are heavier, clumsier, generally of lower quality, less powerful, and less long-lived than AC-powered tools.
To be honest, there's a little Amish in me: I have hand-powered tools as backup for all my AC tools.
I've been wondering for a while if the display on ebikes could also be a more open and durable part of it.
1. I shared the app with the small audience I have and received some feedback in very unexpected places. First, it was hard to understand how lists work because putting things into lists was an unobvious process. I fixed that by adding DnD that works well both with mouse and touch (turned out it’s two separate APIs). Second, users thought that the screenshot on the quite minimal landing page was the real app, and they clicked on it. The problem was so frequent and surprising that I decided to add something funny for people who do that, as I’m not willing to contribute a lot of time to the landing right now.
2. I underestimated how bad discoverability on the internet is. My expectation was that I would make my site fully server-side rendered, add a basic sitemap to Search Console, and have a few dozen organic users during the pre-holiday season when users are filling their wishlists. In reality, I got zero — not even users, but even visits. So I started actually working on SEO, no black magic but just adding slightly more complex sitemaps, micro-markup, and other stuff which I thought only products competing for the first page would need.
My next steps are to work on getting some minimal organic inflow of users and improving stuff related to auth and user management, which is the most time-consuming part of the work right now.
Drones are real bastards - there's a lot of startups working on anti drone systems and interceptors, but most of them are using synthetic data. The data I'm collecting is designed to augment the synthetic data, so anti drone systems are closer to field testing
I started my program in Swift and SwiftUI, although for various reasons I'm starting to look at Dart and Flutter (in part because being multiplatform would be beneficial, and in part because I am getting the distinct feeling this program is more ambitious than where SwiftUI is at currently). It isn't a direct port of Dramatica by any stretch, instead drawing on what I've learned writing my own novels, getting taught by master fiction writers, and being part of writing workshops. But no other program that I've seen uses Dramatica's neatest concepts, other than Subtxt, a web-based, AI-focused app which has recently been anointed Dramatica's official successor. (It's a neat concept, but it's very expensive compared to the original Dramatica or any other extant "fiction plotting" program. Also, there's a space for non-AI software here, I suspect: there are a lot of creatives who are adamantly opposed to it in any form whatsoever.)
I really really want something like this, something I can run locally without paying $100 a year for Arc Studio.
Arc Studio looks like it's a screenwriting program, which is a different animal, though. Have you tried out Fade In (https://www.fadeinpro.com), which is a $79 one-time fee? I haven't used it, I confess, but I hear great things about it. (I used the even cheaper Highland Pro for my one try at screenwriting so far, but that was a learning exercise.)
I find selling commercial software to be really difficult, honestly I'd rather just release my projects for free versus dealing with someone complaining that it doesn't run right, etc.
If you have a mailing list or something I would be glad to sign up! I'll be your first customer.
So far I have a duet mainboard wired up to motors and a commercial gantry set (openbuilds). I've figured out how to wire up a servo control board to a GPIO pin, and the gcode necessary to run the servo up and down.
I'm designing and 3d printing parts for the pen gantry, I have a nice rail / slider setup using linear bearings. I'm almost done working out how the pen holder fits into my gantry setup but I'm struggling a little bit getting this past the finish line.
I already figured out how to generate custom GCODE that takes into account the needs of having no z axis. I need to make a simple web interface that lets me interact with the duet over USB, and this will be running off a raspi. This will allow me more GPIO and flexibility vs just wiring buttons straight to the duet.
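As a sketch of the no-z-axis approach: pen lifts become servo commands between XY moves. M280 is the Marlin/RepRapFirmware-style servo gcode; the pin and angles below are made up and would need calibrating for the actual pen holder:

    PEN_UP, PEN_DOWN = "M280 P0 S90", "M280 P0 S30"  # made-up servo angles

    def polyline_to_gcode(points, feed=3000):
        x0, y0 = points[0]
        yield PEN_UP
        yield f"G0 X{x0:.2f} Y{y0:.2f}"            # travel move, pen raised
        yield PEN_DOWN
        for x, y in points[1:]:
            yield f"G1 X{x:.2f} Y{y:.2f} F{feed}"  # drawing move, pen down
        yield PEN_UP

    print("\n".join(polyline_to_gcode([(0, 0), (50, 0), (50, 50)])))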
I already have some code and logic to generate trace data from bitmap images, I just need to figure out a way to automate it so that the output still looks nice.
Once all that works... if I glue it together, I will be able to push a button and @robotdrawsyou (https://www.instagram.com/robotdrawsyou)
The goal is to create technology that is indistinguishable from magic. People without the technical understanding of what's going on will just see it as tech junk, but my hope is that by breaking down all the individual parts it will allow people to learn about CNC machines, vector vs raster and what it means for something to actually be a robot.
I still have zero idea how to make money with this. Career is struggling really badly but I am hopeful that what I am working on will allow me to display competency and skill to an employer. That's the fantasy at least.
Merchants who want to sell on Etsy or Shopify either have to pay a listing fee or pay per month just to keep an online store on the web. Our goal is to provide a perpetually free marketplace that is powered solely off donations. The only fees merchants pay are the Stripe fees, and it's possible that at some volume of usage we will be able to negotiate those down.
You can sell digital goods as well as physical goods. Right now in the "manual onboarding" phase for our first batch of sellers.
For digital goods, purchasers get a download link for files (hosted on R3).
For physical goods, once a purchase comes through, the seller gets an SMS notification and a shipping label gets created. The buyer gets notified of the tracking number and on status changes.
We use Stripe Connect to manage KYC (know your customer) identities so we don't store any of your sensitive details other than your name and email. Since we are in the process of incorporating as a 501(c)(3) nonprofit, we are only serving sellers based in the United States.
The mission of the company is to provide entrepreneurial training to people via our online platform, as well as educational materials to that aim.
I want to be able to script prices, product descriptions, things like that. And see them show up in a request on sale.
When you say "algorithmically driven print-on-demand" do you mean that prices would automatically adjust based on inventory? Or like, how do you mean.
Also, when you say "see them show up in a request on sale" — can you clarify? I interpret this to mean you want a webhook triggered when an order comes in.
I'm trying to get it polished up for an initial release, including some GitHub Actions config so people can easily run it in CI.
I'm a robotics engineer by training, this is my first public launch of a web app.
Try it: https://app.veila.ai (free tier, no email required)
- What it is:
- Anonymous AI chat via a privacy proxy (provider sees our server, not your IP or account info)
- End‑to‑end encrypted history, keys derived from password and never leave your device
- Pay‑as‑you‑go; switch models mid‑chat (OpenAI now; Claude, Gemini and others planned)
- Practical UX: sort chats into folders, Markdown, copyable code blocks, mobile‑friendly
- Notes/limits:
- Not self‑hosted: prompts go to third‑party APIs
- If you include identifying info, upstream sees it
- Prompts sometimes take a while, because reasoning is set to "medium" for now. Plan to make this adjustable in the future.
- Looking for feedback:
- What do you need to trust this? Open source? Independent audit?
- Gaps in the threat model I'm missing
- Which UI features and AI models you'd want next
- Any UX rough edges (esp. mobile)
- Learn more:
- Compare Veila to ChatGPT, Claude, Gemini, etc. (best viewed on desktop): https://veila.ai/docs/compare.html
- Discord: https://discord.gg/RcrbZ25ytb
- More background: https://veila.ai/about.html
Homepage: https://veila.ai
Happy to answer any questions.
In this space, it is more about trust and what you have done in the past than anything else. Audits and whatnot are nice, but I need to be able to trust that your decisions will be sound. Think how Steam's Gabe gained his reputation. Not exactly an easy feat these days.
FWIW, favorited for testing.
I'd love to hear your feedback if you get around to test Veila, e.g. on hey@veila.ai.
I'm putting a bunch of security tools / data feeds together as a service. The goal is to help teams and individuals run scans/analysis/security project management for "freemium" (certain number of scans/projects for free each month, haven't locked in on how it'll pan out fully $$ wise).
I want to help lower the technical hurdles to running and maintaining security tools for teams and individuals. There are a ton of great open source tools out there, most people either don't know or don't have the time to do a technical deep dive into each. So I'm adding utilities and tools by the day to the platform.
Likewise, there's an expert platform built into the system for getting help with your security problems. (Currently an expert team consisting of [me].) Longer term, I'm working on some AI plugins to help alert on CVEs custom to you, generate automated scans, and some other fun stuff.
https://meldsecurity.com/ycombinator (if you're interested in free credits)
- auto-syncing of your playlists to the server and locally (you can also download your playlists as .json or .txt)
- auto-shuffle is included
- complete support for mobile browsers
- the ability to play the next item (and previous item) without having to reload the page
- you are not limited to what you can listen to with an account
- you can share playlists with others
- access to my media player is 100% FREE; you do not have to pay $30/yr to $600/yr as with Kottke
Also, Underscore gives you a random song on reload whereas my project works exactly like an MP3 player and even leaves off at the exact spot on the item you were just listening to on reload. As stated on that page, it was a vibe coded Claude project which is most likely why they are missing so many features.
These are all things that would drive me crazy if they were not on an MP3 player, which is why I made this solution.
I'm working on Habitat. It's a free and open source, self-hosted platform for communities to discover and discuss their local area. The plan is for it to be federated.
Stuck on a frustrating little bug at the moment; once I get through that, I'll probably work on some CI utils to ensure the code stays up to scratch, then tag the first release.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
My college roommate and I are fine-tuning an open source LLM to detect prompt injection and other LLM-targeted attacks. We made a game where you try to get the LLM to give up a secret code. So far only 1 person has been able to crack it. Check it out here: https://www.integrated.io/
One thing I've come to understand about the process is that becoming a self-sufficient author is very similar to being a startup - only you're a startup of yourself. Building the product, building your market, building the infrastructure, watching what your other 'startups' are doing - it's all part of the process.
In the past days, I've released one of my stories as a 'Make Your Own Adventure' - https://inkican.com/make-your-own-scifi-adventure-now/
I've also integrated Bluesky into my blog comments: https://inkican.com/test-driving-a-new-comment-platform-blue...
And I've created some free STEM content for K-6 teachers on the future of space elevators: https://inkican.com/space-elevators-getting-to-the-next-leve...
Going to continue with my ideas and I appreciate the chance to tell you about them. Thank you. :)
25-Hydroxyvitamin D, also known as calcidiol, regulates calcium absorption in the intestines, promotes bone formation and mineralization, and supports immune function.
Apolipoprotein B (ApoB) is a protein that binds to LDL receptors on cells, allowing lipoproteins to deliver cholesterol and triglycerides to tissues for energy or storage.
Lipoprotein(a) is a low-density lipoprotein variant identified as a risk factor for atherosclerosis and related diseases, such as coronary heart disease and stroke.
etc.
One of the best I’ve seen in this thread!
Good luck with your mission!
Built a ton of side projects in the last 5 years with none of them going anywhere. Realized the distribution is a problem. So after reading a bunch of marketing books, I decided to give SEO a try. And after some playing around realized there is potential there and decided to automate this process for all my projects.
Keep struggling with moving away from building more features and focusing on marketing.
Recorded a demo today, will start cold-emailing and reaching out to people who hopefully will find it useful.
I started changing some things last year, adding microservices with NodeJS.
I am using VueJS for the new editor and Laravel for the back-end. Added several features that I had planned over the years. I am 98% there and mostly prepping for the migration. Will switch to subscription and add a couple of different plans.
The challenge is how ChatGPT can understand your "query", or rather your "prompts". Raw data is not good enough, so I use a term called "AI Understanding Score" to measure it: https://senify.ai/ai-understanding-score. I think this index will help users build more context so that the AI knows more and answers with the correct result.
This is very early work without every detail considered; I'd really love your feedback and suggestions.
You can have a try with some MCP services here: https://senify.ai/mcp-services
Thanks.
After using evil-mode and meow, this is a system I've come up with that addresses issues I ran into with both.
I think app icons are an underrated artistic format, but they’ve only been used for product logos. I made 001 to explore the idea of turning them into an open-ended creative canvas. There are 99 “exhibit spaces” in the gallery, and artists can claim an exhibit to install art within. Visitors purchase limited-edition copies of pieces to display as the app’s icon, the art’s native format.
It’s a real-money marketplace too - the app makes money by taking commission of sales (Not crypto). I like economic simulation games and I think the constraints here could be interesting.
I’m currently looking for artists to exhibit in the gallery, if anyone is interested, or knows someone who may be, please let me know!
Making it with Bevy, the Rust game engine, and really enjoying it so far. Using Blender for making assets. I'm maybe a dumbass for making it as my first game, but I just don't really get excited by smaller projects.
Overall I've found modern games to be (1) overstimulating and (2) have algorithms in the background to keep me engaged that I don't trust (see: free to play model)
Last month was an improvement. This month I can't concentrate for long and I get distracted very easily, but I seem to be able to do more with what I have. A small sense of ambition, that I might be able to do bigger things and might not need to drop out of tech and get a simple job, is returning.
I am trying to use this inhibited, fractured state to clarify thoughts about useless technology and distractions, and about what really matters, because (without wishing to sound haughty) I used to be unusually good at a lot of tech stuff, and now I am not. It is sobering but it is also an insight into what it might be like to be on the outside of technology bullshit, looking in.
Currently I'm spending numerous hours trying to package for multiple Linux distributions. I have to say that building for Ubuntu using the Debian build system and Launchpad seems like a way to spend days for nothing except frustration :) Maybe the problem is also me / PEBKAC
The main idea is to gather tech articles in one place and process them with an LLM: categorize them, generate summaries, and try experimental features like annotations, questions, etc.
I hope this service might be useful to others as well. You can sign up with a GitHub account to submit your own articles.
Made primarily for my friend's coffee shop. Data is stored locally, and the app is fully functional when offline. There is an optional "syncing" feature to sync your data with multiple devices which requires a sign up. This is a Progressive Web App built with Web Components. The syncing is made possible with PouchDB/CouchDB.
I still have to write (or screen record) a Getting Started guide but the app is ready for use nonetheless.
The idea is to enable a comment section on any webpage, right as you’re browsing. Viewing a Zillow listing? See what people are excited about with the property. Wonder what people think about a tourist attraction? It’ll be right there. Want to leave your referral or promo code on a checkout page for others? Post it.
Not sure what the business model will look like just yet. Just the kind of thing I wish existed compared to needing to venture out to a third party (traditional social media / forums etc) to see others’ thoughts on something I’m viewing online. I welcome any feedback!
Keep in mind I’d only be storing comments and the references to where they’re posted. I don’t need to know the webpages ahead of time at all.
Besides the LLM experimentation, this project has allowed me to dive into interesting new tech stacks. I'm working in Hono on Bun, writing server-side components in JSX and then updating the UI via htmx. I'm really happy with how it's coming together so far!
My current prototype scans potential lookalikes for a target domain and then tracks DNS footprint over time. It's early, but functional - and makes it easier to understand if some lookalike domain is looking more "threat-y".
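A minimal version of one footprint snapshot can be sketched with dnspython; the "threat-y" signal then comes from how these records change between snapshots:

    import dns.resolver  # pip install dnspython

    def snapshot(domain, rdtypes=("A", "MX", "NS")):
        records = {}
        for rdtype in rdtypes:
            try:
                answers = dns.resolver.resolve(domain, rdtype)
                records[rdtype] = sorted(r.to_text() for r in answers)
            except Exception:  # NXDOMAIN, no answer, timeout, ...
                records[rdtype] = []
        return records

    # A parked typo-squat usually has no MX; one that sprouts MX + A
    # records over time is worth a closer look.
    print(snapshot("example.com"))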
I've also been working on automating the processing of a parent-survey response for my kid's school using LLMs. The goal is to produce consistent summarization and statistics across multiple years, provide families with a clearer voice, and help staff and leadership at the school understand what has been working well (and where the school could improve).
There are several really good products in this space FYI, but I'm sure a new angle can be competitive.
A unified platform for product teams to announce updates, maintain a changelog, share roadmaps, provide help documentation and collect feedback with the help of AI.
My goal is to help product teams tell users about new features (so they actually use them), gather meaningful feedback (so they build the right things), share plans (so users know what's coming), and provide help (so users don't get stuck).
Doing it as an indie hacker + solo founder + lean. Started 13 days ago. Posting about my journey on Youtube every week day https://www.youtube.com/@dave_cheong
Trying to fix this problem with Eternal Vault.
Link: https://eternalvault.app
That said, the paid plans unlock higher limits and advanced features (all mentioned on the pricing page).
Let me know if you have any specific questions, and feel free to DM me on my socials or via email (all available on the contact page).
I was motivated to build this after finding that many great personal finance and budget apps didn't offer integrations with the banks I used, which is understandable given the complexity and costs involved. I wanted to tackle this problem and help build the missing open banking layer for personal finance apps, with very low costs (a few dollars a month) and a very simple API, or built-in integrations.
Still working on making this sustainable, but been quite a learning experience so far, and quite excited to see it already making a difference for so many people :)
Given two Goodreads accounts, BookBlend uses a combination of web scraping, data analysis, and LLMs to calculate a blend score from 0-100. It shows you shared books, authors, and genres, as well as recommends books for the two to read together!
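As a toy example, the overlap component of a blend score might look like the sketch below; the real score also folds in authors, genres, and LLM-based signals:

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    alice = {"Dune", "Piranesi", "The Hobbit"}
    bob = {"Dune", "The Hobbit", "Neuromancer"}
    print(round(jaccard(alice, bob) * 100))  # 50: two shared of four total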
It's 100% free, and the source code is available on the "info" modal in the top right.
The control panel was built from scratch. I used an ESP32 board to detect inputs from the buttons, as well as drive the RGB LEDs in the buttons. The ESP32 outputs keyboard keypresses, and accepts serial input over USB to change the color of the buttons. My goal here is to have the buttons illuminate based on which game is loaded.
Lastly, I embedded a stream deck module in the control panel for auxiliary functions. For this, I built a node app to operate the stream deck given the lack of Linux support from Elgato. I'm going to put together a big blog post once I get closer to completion, but for now, here are some photos.
https://www.icloud.com/photos/#/icloudlinks/00cTcyTxaASpQ3t1...
The event manager has a built-in ELO-style rating system designed for computer games and applies it to offline sports. You tap on the winning pair and it automatically calculates new ratings offline. It takes the attendees from the event automatically, and can manage taking turns evenly, placing players of similar level together and balancing matches. It can also do all of that while handling players who come in and out at random times, or matchmaking for fixed pairs.
So what problem does it solve? The event manager is a solution for anyone who organizes slightly larger groups of players. So instead of a paddle queue or random round-robin tables, an event can be managed by level, at a very granular level, fully automatically. Players just go to the court they are assigned and they'll be put in a balanced competitive match automatically. At the end of the match the players just tap on the winning team. I think commercial facilities or clubs could also benefit from using this as it can do matchmaking for "open play" style events, but this is an open source side project and I have no intention of making any sales pitches.
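For the curious, an Elo-style update for doubles can be sketched in a few lines, assuming (as a simplification of whatever the real system does) that a team's rating is the average of its players':

    K = 32  # standard Elo K-factor; the 400 scale is also standard

    def expected(team, opp):
        return 1 / (1 + 10 ** ((opp - team) / 400))

    def update(winners, losers):
        tw, tl = sum(winners) / 2, sum(losers) / 2
        delta = K * (1 - expected(tw, tl))
        return [r + delta for r in winners], [r - delta for r in losers]

    # Underdogs win -> bigger rating swing.
    print(update([1400, 1450], [1500, 1550]))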
https://www.pkuru.com - please check it out and if you are visiting Tokyo or Bangkok you can join the Pickleball events straight from the website.
The rough overview is on my X post here: https://x.com/BobAdamsEE/status/1965573686884434278
It's a long-running process, and the HW is mostly defined (but not laid out), though on pause while I work on porting TockOS to an ATSAMV71, to make sure I won't run into any project-ending issues with the SW before I build the hardware.
I’m also working on the next version of HacKit, a macOS Hacker News reader focused on simplicity, performance and adherence to macOS design language. It’s already available on the App Store: https://apps.apple.com/app/id1549557075 and further information can be found here: https://github.com/anosidium/HacKit-Feedback-And-Support.
I’d be delighted to hear any feedback, suggestions, or ideas — or simply to connect with others working on Apple platform development.
I'm currently working on language detection to expand the "facts" with a language distribution on each generated label.
I tried getting Adsense on the website but I keep getting denied on "Low Content Value" grounds. I tried some alternatives but the quality of their ads was ridiculous (stuff like "your device has a virus, click here to clean it up") so I just gave up on that.
Here[2] you can find what one of the labels looks like and many more from some user submissions
[0] - https://listeningfacts.com/
[1] - https://developer.spotify.com/blog/2025-04-15-updating-the-c...
[2] - https://www.reddit.com/r/lastfm/comments/1mnk5wj/listening_f...
We're an SEC Registered Investment Advisor that combines financial planning with institutional-style investment portfolios designed to achieve financial goals.
Highlights:
- Full money management platform at the core - expenses, budgeting, tracking across all your accounts. But instead of stopping there, we actually provide wealth building & financial planning tools.
- We model our investment portfolios using forward-looking institutional research at the asset class level and then recommend it to users as public ETFs that track the underlying index. This is extremely low cost for the client and allows us to model & manage custom portfolios unique to each goal that is created.
- No transfers needed, everything is GUIDED through your existing accounts via secure sync. We literally show you what to buy, when to buy it, how much, and (eventually) where to do it in your own brokerage/bank accounts. You stay in control, we just tell you the exact steps.
What's next:
- Building out "financial playbooks" next. Think step-by-step guided modules that walk you through achieving specific goals (building an emergency fund, buying a house, retirement planning, etc) with the investment strategy baked directly into it where appropriate. The idea is to combine the actual tactical planning actions (what accounts to open, important action dates, tax optimization moves) with the investment management, so it's a truly personalized experience.
Currently in open beta in the US. Any feedback is welcome!
In short, an explorable database of movies, TV shows, books and board games organised around the time and place that they're set. So if you're interested in stuff set during the French Revolution but not in Paris, you could find it there, for instance.
This weekend I’m working on making the parsing more robust. The most common friction I’ve heard is that downloading books elsewhere and importing them into the app is distracting. I’m torn between expanding it to include a peer-to-peer book exchange or turning it into an RSS feed reader.
The stoneware bitrot was legacy but eventually overwhelmed the architecture during an off-peak environment incident.
I'm tasked with fulfilling runtime dependencies to restore the wall framework, but had issues with build time mixing parameters not compiling well with the piecemeal building blocks.
I finally got it up and running through trial and error, though I sense a full rewrite will eventually be needed in the future.
The big thing I wanted to try is automatic global routing via MQTT.
Everything is globally routable. You can roam around between gateway nodes, as long as all the gateways are on the same MQTT server.
And there's a JavaScript implementation that connects directly to MQTT. So you can make a sensor, go to the web app, type the sensor's channel key, and see the data, without needing to create any accounts or activate or provision anything.
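To give a flavor of the reader side, here's a Python sketch with paho-mqtt; the broker host and topic layout are assumptions for illustration, not the project's actual scheme:

    import paho.mqtt.client as mqtt  # paho-mqtt 1.x-style API

    CHANNEL_KEY = "abc123"  # whatever key the sensor publishes under

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.org", 1883)
    client.subscribe(f"nodes/{CHANNEL_KEY}/#")  # all data under this key
    client.loop_forever()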
Porting a trivia quiz game I wrote 25 years ago from a Java Applet to the modern-day web. Has a scoring mechanism like golf (par-5 harder, par-3 easier, wrong answer choice costs a shot, etc.) Funny how one's code looks the same or different after 25-years!
A couple of other things I can't mention for various reasons... hopefully next time.
I want to work out at least the minimum amount but always end up procrastinating... for some fortunate ones (me) it only takes like 20 min a day to keep in good shape, with stuff you can do at home. We all know this, but for many, somehow it never happens.
I want to keep a tally of the push ups I do every day (and squats, etc...). I decided to gamify it, but not in a crappy way. I would like to see my streaks (kind of like how Github shows commits) and how other friends are doing.
Right now it's prototype v0.0.0.0.0.1, as you can see: no UI, and the push up detector actually kind of detects squats, lol, but I'm working on it. Btw, the push up detector is client side only, so rest assured I never get to see your video.
There's a global push up count, an aggregate of all push ups everyone does on the site. Right now it's linked to a button, so it's more like a clicker; feel free to exercise your fingers. I figured it would be super nice if one day we could do like a million push ups collaboratively. Or just seeing it go up in real-time, meaning somebody else is working out, should get me inspired to do some as well.
Please leave your feedback and yeah you can join the Push Up Club anytime :D.
Lately, I've been hacking on improving its linear algebra support (as that's one of the key focuses I want - native matrix/vector types and easy math with them), which has also helped flush out a bunch of codegen bugs. When that gets tedious, I've also been working on general syntax ergonomics and fixing correctness bugs, with a view to self-hosting in the future.
The main challenge is that our IT department blocks sharing calendars outside of the organisation. While this is primarily a solution for my own problem and likely not valuable to others, you could probably achieve the same result with tools like n8n or IFTTT.
- Wallpaper manager with multi-monitor support and multiple image sources. Change wallpapers daily, hourly, etc.
- Lockscreen image manager with the same modes as the wallpaper feature.
- Screensaver/fullscreen modes manager with many screensaver options and multi-monitor support.
- Custom shortcut menu builder where you can have a custom menu accessible from your tray area.
[0] - https://lumotray.com
I didn't really find the MS store to be hard to use, but it's for sure the weakest of the "app stores" when it comes to discoverability.
That said, it has become a bit better with Windows 11 and as time goes by Windows users are gradually starting to trust MS store apps a bit more.
Cheers
I only recently reached an alpha - and I am looking for testers!
- Alpha screenshot: https://drive.google.com/file/d/1Wi6MqxC17iIzfSL--_nNxHxbID1...
- Rambling why I built it, plus Discord link: https://progress.compose.sh/about
Even though I am not your target audience (linux i3 user myself), I would be interested in knowing how much "hacking" the macOS system is required to do this. Is it hard to get a list of running apps for your Task Bar? Is it hard to list the apps for the menu? How about keeping it all "on top" while other windows e.g. get maximized/minimized/full-screen, etc?
You actually nailed the major pain points. Particularly window focus and state management. I've spent months solving this problem alone.
-
1. Applications data list: Getting the list is easy! Finding out which apps in that list are "real" apps isn't. Getting icons isn't. Reliably getting information on app state isn't. Finding out why something doesn't work right is as painful as can be. Doing all this in a performant way is a nightmare. (See the sketch after this list.)
2. Applications menu renderer: Rendering the list for the menu is easy enough: the macOS app sends this data via socket. The frontend is just web sockets and web components under the hood (https://lit.dev). The difficult part was converting app icons to PNG, which is awfully slow. So a cache-warmup stage on startup finds all apps, converts their icons to png, and caches them to the app directory for read.
3. Window state: again, by far the worst and it isn't even close. Bugs galore. The biggest issue was overriding macOS core behavior on what a window is, when it's focused, and how to communicate its events reliably to the app. Although I did include a couple private APIs to achieve this, you can get pretty far by overriding Window class types in ways that I don't think were intended (lol). There is trickery required for the app to behave correctly: and the app is deceptively simple at a glance.
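Here's roughly what step 1's first filter looks like, sketched in Python with pyobjc (the app itself isn't Python): activationPolicy 0 (NSApplicationActivationPolicyRegular) means the app shows in the Dock, which is a decent first cut even though, as noted above, it's nowhere near sufficient on its own.

    from AppKit import NSWorkspace  # pip install pyobjc

    apps = NSWorkspace.sharedWorkspace().runningApplications()
    real = [a.localizedName() for a in apps
            if a.activationPolicy() == 0]  # NSApplicationActivationPolicyRegular
    print(real)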
-
One bug, and realization, that still makes me chuckle today: anything can be a window in macOS.
I'm writing this on Firefox now, and if I hover over a tab and a tooltip pops up - that's a window. So a fair amount of time has gone into determining _what_ these apps are doing and why. Then coming up with rules on determining when a window is likely to be a "real" window or not.
The Accessibility Inspector app comes standard on macOS and was helpful for debugging this, but it was a pain regardless.
It's meant to be a 'rails-like' experience in Go without too much magic and conventions.
Basically, speeding up development of fullstack apps in Go using templ, datastar, sqlc with an MVC architecture and some basic generators to quickly setup models, views and controllers.
I’ve been working for the past 3 years on SelfHostBlocks https://github.com/ibizaman/selfhostblocks, making self-hosting a viable and convenient alternative to the cloud for non-technical people.
It is based on NixOS and provides a hand-picked groupware stack: user-facing there is Vaultwarden and Nextcloud (and a bunch more, but those 2 are the most important IMO for non-technical people, as they cover most of one’s important data) and on the backend Authelia, LLDAP, Nginx, PostgreSQL, Prometheus, Grafana and some more. My know-how is in configuring all this so they play nice together, with backups, SSO, LDAP, reverse proxy, etc. integration. I’m using it daily as the house server; I’m my first customer after all. At the beginning of 2025 it passed my own internal checkpoint to be shared with others, and there’s a handful of technical users using it.
My goal is to work on this full time. I started a company to provide a white glove installation, configuration and maintenance of a server with SelfHostBlocks. Everything I’ll be doing will always be open source, same as the whole stack and the server is DIY and repair friendly. The continuous maintenance is provided with a subscription which includes customer support and training on the software stack as needed.
I really believe that we're going to see this adopted as a standard for on-demand inference specifically from low powered devices. We'll give them a guide of intent and periodically it will get scored. It just makes sense.
However, sites change and markdown is hard for a lot of people. We take that for granted as technical people.
Continuing to work on probe.bike, which lets you tell stories from your bikepacking data. See the full route, share the full route, and figure out your totals from calories to kudos. Gonna be doubling down again on this soon.
It's really hard to understand what 10 gigawatts actually is in terms of GPUs. Or what the individual TDP of your GPU estate is.
I've built flopper to try to act as a translation layer and calculator. Some inspiration from vantage.sh here.
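To make that concrete, here's a back-of-envelope version of the kind of conversion flopper automates (illustrative numbers: assuming roughly 700 W per H100-class GPU, and ignoring cooling and networking overhead):
```
# Back-of-envelope power-to-GPU translation. The 700 W figure is an assumed
# H100-class TDP; real facilities add cooling and networking overhead on top.
WATTS_PER_GPU = 700
gigawatts = 10

gpus = gigawatts * 1e9 / WATTS_PER_GPU
print(f"{gpus:,.0f} GPUs")  # -> 14,285,714 GPUs, give or take
```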
AI is making it possible to moonlight on these projects. Definitely burning candles at both ends though.
It's working well and I think I can use the same "backend" to pull this data into a spreadsheet which could be useful for data hungry users/coaches/club and event organizers/etc.
I'm currently collecting submissions for the first editions. So if you would be interested to feature your smart home, that would be awesome.
Current version is over at https://owl.so but the local first native app (Owl/2) is about to hit beta real soon.
https://github.com/RoyalIcing/Orb
It’s implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static & bump allocation, so I’m trying to find a happy medium of simplicity with a powerful-enough paradigm yet generate simple, compact code.
Basically, think of it as "Pokemon the anime, but for real". We allow you to use your voice to talk to, command, and train your monster. You and your monster are in this sandbox-y, dynamic environment where your actions have side effects.
You can train to fight or just to mess around.
Behind the scenes, we are converting player's voice into code in real time to give life to these monsters.
If you're interested, reach out!
- A SAAS to help HomeSchool Coop and Collectives build an online presence where they set up member registration, show course offerings, and let parents enroll their students. Main competitor is HomeSchoolLife and this started because our own HS Collective used HSL and hated it; so I thought I'd try. The complexity has grown rapidly, trying to custom roll a CMS because I couldn't get Wagtail to integrate nicely with my site, and Django CMS leverages the Django Admin but I don't want my users to have to deal with that.
- Also playing around with a SAAS targeted at YouTube car flippers, to keep track of their current flips and, eventually, research new potential flips.
- I've got plenty of other unfinished projects that I still tinker with; a real-time version of Words with Friends, because I hated the slow asynchronous gameplay. I started it 12 years ago and then life got in the way. I'm guessing WWF probably has this mechanic now; I don't play anymore.
Basically, an agentic platform for working with rich text documents.
I’ve been building this solo since May and having so much fun with it. I created a canvas renderer and all of the word processor interactions from scratch so I can have maximum control over how things are displayed when it comes to features like AI suggestions and other more novel features I have planned for the future.
He liked what I built for him and I got jealous, so I expanded it with my own profile (Trail running).
Then, I got curious… Could I build a full web platform for people to track their sporting life? I mean, we have LinkedIn and CVs for our job careers, so why not celebrate all our sports/training efforts as well?
After a couple of months on the side, I'm pretty happy with Flexbase. If you're into sports, give it a try and let me know what's missing for you.
Note: it's mobile-only past the front page.
https://flexbase.co/ My profile: https://flexbase.co/athletes/96735493
You can list the sports you're doing or did in your entire life, you can add your PRs, training routines, gear, competition results, photos. You can also list your clubs, and invite/follow your training buddies.
Honestly, I'm not sure where (or if) to expand it... Turn it into a club-centric tool, or make it more of a social network for sporty people?
Lots of ideas, but I'd love to find someone to work on it with me. I find that building alone is less fun.
Thanks for your sporty feedback.
There are a few similar projects too: one is itself a startup which is sadly on the verge of bankruptcy, and another aggregates only IT-related jobs.
- Getting into RTL SDR, ordered a dongle, should be fun, want to build a grid people can plug into
- Bringing live transcripts, search and AI to wisprnote
- Moving BrowserBox to a binary release distribution channel for IP enforcement and ease of installation. The public repo will no longer be updated except for docs/version/base install script; all dev happens internally, with binaries released to https://github.com/BrowserBox/BrowserBox. Too many "companies" (even "legit", large ones) are abusing ancient forks and stealing our commercial updates without a license, or violating the previous license's conditions, like AGPL source provision. The business lesson: even commercially licensed source-available software eats into the sales pipeline because of violators who could pay but assume false impunity and take "freebies" "because they can." There's no perfect protection, but from now on enforcement will ramp up, and source access is only for minimum-ACV customers as an add-on. So many enhancements are coming down the pipe that it's going to be many improved versions from here
- Creating an improved keyboard for iOS swipe typing, I don't like the settings or word choices in ambiguity and think it can be better
Last month:
• wrote my first NEON SIMD code
• implemented adaptive quadrature with Newton–Cotes formulas
• wrote a tiny Markov-chain text generator
• prototyped an interactive pipeline system for non-normalized relational data in Lua by abusing operator overloading
• load-tested and taste-tested primary batteries at loads exceeding those in the datasheet; numerically simulated a programmable load circuit for automating the load testing
• measured the frequency of subroutine calls and leaf subroutine calls in several programs with Valgrind
• wrote a completely unhealthy quantity of commentary on HN
New ideas I'm thinking about include backward-compatible representations of soft newlines in plain ASCII text, multitouch calculators supporting programming by demonstration, virtual machines for perfectly reproducible computations, TCES energy storage for household applications beyond climate control such as cooking and laundry, canceling the harmonic poles of recursive comb filters with zeroes in the nonrecursive combs of a Hogenauer filter, differential planetary transmissions for compact extreme reductions similar to a cycloidal drive, rapid ECM punching in aluminum foil, air levigation of grog, ultra-cheap passive solar thermal collectors, etc. Happy to go into more detail if any of these sound interesting.
Right now I am getting my first users and already getting great feedback. Many things on the roadmap.
Always eager to learn more about others pain points when it comes to React Native/mobile development. Let me know what you think!
I always learned programming and maths on my own so any advice is welcome!
I'm focusing on the South African market (I know a bunch of agents and I've noticed an increase of very obviously ChatGPT generated images on property listing websites)
My hope is that I can own a nice slice of the listing marketing workflow:
- Creating great, but realistic, staging images for listing websites
- Improving property descriptions and copy
There is some early interest from agents, hoping to start marketing properly this week!
Since last month, I have created a complete schematic with Circuitscript, exported the netlist to pcbnew and designed the PCB. The boards have been produced and I'm currently waiting for them to be delivered to verify that everything works. Quite excited, since this will be the first design ever produced with Circuitscript as the schematic capture tool!
The motivation for creating Circuitscript is to describe schematics in terms of code rather than graphical UIs after using different CAD packages extensively (Allegro, Altium, KiCAD) in the past. I wanted to spend more time thinking about the schematic design itself rather than fiddling around with GUIs.
The main language goals are to be easy to write and reason about, to display generated graphical schematics according to the designer's wishes (because this is also part of the design process), and to encourage code reuse.
Please check it out and I look forward to your feedback, especially from electronics designers/hobbyists. Thanks!
I started using it as a tool call in security scanning (think of something like claude-code for security scanning).
Give it a read if you're interested:
https://codepathfinder.dev/blog/codeql-oss-alternative/
https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
Happy to discuss!
I haven't used Claude Code, but recently switched to OpenCode. My token usage and cost is a lot higher, I'm not sure why yet, but I suspect Aider's approach is much more lean.
Right now it connects to local and remote databases like SQLite and Postgres, lets you browse schemas and tables instantly, edit data inline, and create or modify tables visually. You can save and run queries, generate SQL using AI, and import or export data as CSV or JSON. There’s also a fully offline local mode that works great for prototyping and development.
One of the more unique aspects is that DB Pro lets you download and run a local LLM for AI-assisted querying, so nothing ever leaves your machine. You can also plug in your own cloud API key if you prefer. The idea is to make AI genuinely useful in a database context — helping you explore data and write queries safely, not replacing you.
The next big feature is a Visual Query Builder with JOIN support that keeps the Visual, SQL, and AI modes in sync. After that, I’m working on dashboards, workflow automation, and team collaboration — things like running scripts when data changes or sharing queries across a workspace.
The goal is to make DB Pro the most intuitive way to explore, query, and manage data — without the usual enterprise clutter. It’s still early, but it’s already feeling like the tool I always wanted to exist.
You can see it here: https://dbpro.app
Would love to hear feedback, especially from people who spend a lot of time in database clients — what’s still missing or frustrating in the current landscape?
* Velo - Postgres with instant branching (https://github.com/elitan/velo)
* Terra - Declarative schema management for Postgres (https://github.com/elitan/terra)
Some fun side projects I hack on during the evenings and weekends.
Live demo: https://play.tirreno.com/login (admin/tirreno)
The Pain Point: If you are analyzing a large YouTube channel (e.g., for language study, competitive analysis, or data modeling), you often need the subtitle files for 50, 100, or more videos. The current process is agonizing: copy-paste URL, click, download, repeat dozens of times. It's a massive time sink.
My Solution: YTVidHub is designed around bulk processing. The core feature is a clean interface where you can paste dozens of YouTube URLs at once, and the system intelligently extracts all available subtitles (including auto-generated ones) and packages them into a single, organized ZIP file for one-click download.
Target Users: Academic researchers needing data sets, content creators doing competitive keyword analysis, and language learners building large vocabulary corpora.
The architecture challenge right now is optimizing the backend queuing system for high-volume, concurrent requests to ensure we can handle large batches quickly and reliably without hitting rate limits.
It's still pre-launch, but I'd love any feedback on this specific problem space. Is this a pain point you've encountered? What's your current workaround?
I haven't upgraded to bulk processing yet, but I imagine I'd look for some API to get "all URLs for a channel" and then process them in parallel.
You've basically hit on the two main challenges:
Transcription Quality vs. Official Subtitles: The Whisper approach is brilliant for videos without captions, but the downside is potential errors, especially with specialized terminology. YTVidHub's core differentiator is leveraging the official (manual or auto-generated) captions provided by YouTube. When accuracy is crucial (like for research), getting that clean, time-synced file is essential.
The Bulk Challenge (Channel/Playlist Harvesting): You're spot on. We were just discussing that getting a full list of URLs for a channel is the biggest hurdle against API limits.
You actually mentioned the perfect workaround! We tap into that exact yt-dlp capability—passing the channel or playlist link to internally get all the video IDs. That's the most reliable way to create a large batch request. We then take that list of IDs and feed them into our own optimized, parallel extraction system to pull the subtitles only.
It's tricky to keep that pipeline stable against YouTube’s front-end changes, but using that list/channel parsing capability is definitely the right architectural starting point for handling bulk requests gracefully.
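For anyone following along, a minimal sketch of that harvesting step using yt-dlp's Python API (simplified to show the idea, not production code):
```
# Sketch: resolve a channel/playlist URL into video IDs without downloading.
import yt_dlp

def video_ids(url: str) -> list[str]:
    opts = {"extract_flat": True, "quiet": True}  # list entries, don't resolve each video
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(url, download=False)
    # Channel URLs can yield nested tabs/playlists; a robust version would
    # recurse into entries that carry their own "entries" list.
    return [e["id"] for e in info.get("entries", []) if e]
```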
Quick question for you: For your analysis, is the SRT timestamp structure important (e.g., for aligning data), or would a plain TXT file suffice? We're optimizing the output options now and your use case is highly relevant.
Good luck with your script development! Let me know if you run into any other interesting architectural issues.
The biggest challenge with this approach is that you probably need to pass extra context to LLMs depending on the content. If you are researching a niche topic, there will be lots of mistakes if the audio isn't of high quality, because that knowledge isn't in the LLM weights.
Another challenge is that I often wanted to extract content from live streams, but they are very long with lots of pauses, so I needed to do some cutting and processing on the audio clips.
In the app I built I would feed an RSS feed of video subscriptions in, and at the other end a fully built website with summaries, analysis, and transcriptions comes out that is automatically updated based on the youtube subscription rss feed.
You've raised two absolutely critical architectural points that we're wrestling with:
Official Subtitles vs. LLM Transcription: You are 100% correct about auto-generated subs being junk. We view official subtitles as the "trusted baseline" when available (especially for major educational channels), but your experience with Gemini confirms that an optimized LLM-based transcription module is non-negotiable for niche, high-value content. We're planning to introduce an optional, higher-accuracy LLM-powered transcription feature to handle those cases where the official subs don't exist, specifically addressing the need to inject custom context (e.g., topic keywords) to improve accuracy on technical jargon.
The Automation Pipeline (RSS/RAG): This is the future. Your RSS-to-Website pipeline is exactly what turns a utility into a Research Engine. We want YTVidHub to be the first mile of that process. The challenge you mentioned—pre-processing long live stream audio—is exactly why our parallel processing architecture needs to be robust enough to handle the audio extraction and cleaning before the LLM call.
I'd be genuinely interested in learning more about your approach to pre-processing the live stream audio to remove pauses and dead air—that’s a huge performance bottleneck we’re trying to optimize. Any high-level insights you can share would be highly appreciated!
```
stream = ffmpeg.filter(
    stream,
    'silenceremove',
    detection='rms',
    start_periods=1,
    start_duration=0,
    start_threshold='-40dB',
    stop_periods=-1,
    stop_duration=0.15,
    stop_threshold='-35dB',
    stop_silence=0.15,
)
```
That specific ffmpeg silenceremove filter is exactly the type of pre-processing step we were debating for handling those massive, lengthy live stream files before they hit the LLM. It's a huge performance bottleneck solver.
We figured ffmpeg would be the way to go, but having your tested parameters (especially the start/stop thresholds) for effective noise removal saves us a massive amount of internal testing time. That's true open-source community value right there.
This confirms that our batch pipeline needs three distinct automated steps:
1. URL/ID Harvesting (as discussed)
2. Audio Pre-Processing (using solutions like your ffmpeg setup)
3. LLM Transcription (for Pro users)
We will aim to make that audio cleaning step abstracted and automated for our users—they won't have to fiddle with parameters; they'll just get a cleaned transcript ready for analysis.
Thanks again for the technical deep dive! This is incredibly helpful for solidifying our architecture.
Your use case confirms that the plain text (TXT) output needs to be highly optimized—meaning we must ensure the final TXT file is as clean as possible:
- No empty lines or spurious formatting from the original subtitle file.
- No redundant tags (e.g., speaker or color codes).
- Just a single, clean block of text ready to be fed into an LLM or analysis script.
We will prioritize making the TXT output option the "cleanest data" choice for users like yourself who are moving the content directly into analysis or RAG systems. This confirms the value of offering both SRT (for video viewing) and TXT (for data analysis).
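For concreteness, the cleaning pass could look something like this (a rough sketch assuming standard SRT blocks, not a final implementation):
```
# Sketch: SRT -> clean TXT. Drops index lines, timestamps, blanks, and markup
# tags, leaving a single block of analysis-ready text.
import re

def srt_to_text(srt: str) -> str:
    kept = []
    for line in srt.splitlines():
        line = line.strip()
        if not line or line.isdigit() or "-->" in line:  # blank/index/timestamp lines
            continue
        line = re.sub(r"<[^>]+>", "", line)  # strip <font>, speaker/color tags
        if line:
            kept.append(line)
    return " ".join(kept)
```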
Let me be 100% transparent: I'm the human founder, but I've been using a language model to help me quickly synthesize and structure these detailed, technical replies, especially while processing all the fantastic feedback (like your ffmpeg script!) and balancing the day-to-day coding.
The goal wasn't to deceive or automate interaction—it was to ensure I could respond to every point with technical clarity without losing the thread, but I clearly over-optimized the structure and lost the necessary human touch. My mistake.
This is a human talking now, hitting reply directly. Your feedback has been invaluable—truly saving us weeks of R&D—and I would never want you to feel that contribution was wasted on a bot.
We are taking your ffmpeg suggestion seriously for the long video pipeline. I'm hitting the keyboard and doing the coding myself.
Thanks for the brutally honest call-out. I'll stick to 100% human responses going forward.
That was a different user[0]; but to be fair that _is_ a human-like mistake to make (especially given HN's UI), so that is - weirdly - endearing :P
But yeah - hopefully this is helpful meta-feedback for you. The "AI tone" is very notable; and when used in (what purports to be) personal communication, it signals disrespect. I totally understand wanting to use tools to collate/summarize/draft; but, until the SotA moves on a _lot_, they can't be trusted for direct replies.
Appreciate the honesty. No hard feelings - keep building, best of luck!
A way to find specific materials would be nice. Think of converting the whole playlist into something like RAG; then you can search anything from the playlist.
You hit the nail on the head regarding language support.
Mandarin/Multilingual Support: Absolutely, supporting a wide range of languages—especially Mandarin—is a top priority. Since we focus on extracting the official subtitles provided by YouTube, the language support is inherently tied to what the YouTube platform offers. We just need to ensure our system correctly parses and handles those specific Unicode character sets on the backend. We'll make sure CJK (Chinese, Japanese, Korean) languages are handled cleanly from Day 1.
The RAG/Semantic Search Idea: That is an excellent feature suggestion and exactly where I see the tool evolving! Instead of just giving the user a zip file of raw data, the true value is transforming that data into a searchable corpus. The idea of using RAG to search across an entire playlist/channel transcript is something we're actively exploring as a roadmap feature, turning the tool from a downloader into a Research Assistant.
Thanks for the use case and the specific requirements! It helps us prioritize the architecture.
You can use video understanding from Gemini LLM models to extract subtitles even when the video doesn't have official subtitles. That's expensive for sure, but you should provide this option to willing users, I think.
You are 100% right. For the serious user (researcher, data analyst, etc.) the lack of an official subtitle is a non-starter. Relying solely on official captions severely limits the available corpus.
The suggestion to use powerful models like Gemini for high-accuracy, custom transcription is excellent, but as you noted, the costs can spiral quickly, especially with bulk processing of long videos.
Here is where we are leaning for the business model:
We are committed to keeping the Bulk Download of all YouTube-provided subtitles free, but we must implement a fair-use limit on the number of requests per user to manage the substantial bandwidth and processing costs.
We plan to introduce a "Pro Transcription" tier for those high-value, high-volume use cases. This premium tier would cover:
- Unlimited/High-Volume Bulk Requests.
- LLM-Powered Transcription: Access to the high-accuracy models (like the ones you mentioned) with custom context injection, bypassing the "no official subs" problem entirely—and covering the heavy processing costs.
We are currently doing market research on fair pricing for the Pro tier. Your input helps us frame the value proposition immensely. Thank you for pushing us on this critical commercial decision!
https://github.com/jakeroggenbuck/kronicler
This is why I wrote kronicler to record performance metrics while being fast and simple to implement. I built my own columnar database in Rust to capture and analyze these logs.
To capture logs, `import kronicler` and add `@kronicler.capture` as a decorator to functions in Python. It will then start saving performance metrics to the custom database on disk.
You can then view these performance metrics by adding a route to your server called `/logs` where you return `DB.logs()`. You can paste your hosted URL into the settings of usekronicler.com (the online dashboard) and view your data with a couple of charts. See the readme or the website for more details on how to do this.
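Minimal wiring might look like this (a sketch: Flask here is just an example server, and the exact `DB` import path is whatever the readme specifies):
```
# Sketch: capture metrics with the decorator, then expose them on /logs for
# the usekronicler.com dashboard. Flask and the DB import path are assumptions.
import kronicler
from kronicler import DB  # assumed import; see the readme for the exact handle
from flask import Flask

app = Flask(__name__)

@kronicler.capture  # saves performance metrics to the on-disk columnar database
def expensive_work(n: int) -> int:
    return sum(i * i for i in range(n))

@app.route("/logs")
def logs():
    return DB.logs()
```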
I'm still working on features like concurrency and other overall improvements. I would love some feedback to help shape this product into something useful for you all.
Thanks! - Jake
Posting this as maybe it will help other orgs out there that are looking for SAST and want to do it cheaply. https://github.com/jdubansky/sassycode
Now the foundation is done and I've learnt a lot. I'm actually eating my own dog food by using it to track my classical guitar practice every day. I'm pausing a while to process the requirements, thinking ultra-deeply about what would be helpful and how to shape the product.
LLMs such as codex and claude code definitely helped a lot, but I guess human beings' opinions would be more helpful - after all, the tool is made for humans instead of being used by claude code.
I would also like to hear: when you start a project and you know your audience isn't super close to AI, would you still consider enabling the AI features for them?
https://github.com/westonwalker/primelit
Drawing a lot of inspiration from interval.com. It was an amazing product but was a hosted SAAS. I'm exploring taking the idea to the .NET ecosystem and also making it a Nuget package that can be installed and served through any ASP.NET project.
Currently:
- Author profiles and case collections with tags/tools.
- Prompt and workflow templates, built-in playgrounds with versioning.
- Search by tasks and tools, drafts and publications.
Next on the roadmap:
- Smart marketing prompt-apps for media: monetization, ratings, A/B metrics, cost/quality.
- Auto-generation of wrappers for different LLM providers and presets for editorial workflows.
- Forks and case collaborations, reproducibility analytics.
- 30k requests/month for free
- simple, stable, and fast API
- MCP Server for AI-related workloads
The tool is free and open-source, so anyone can use it for their own Kafka clusters. It’s very much a “crappy but functional” project at this stage — nothing fancy, just a practical CLI for certain tasks.
You can find it here: https://github.com/KLogicHQ/kafy
I am overengineering a simulation-based solution to this because I think there are scenarios based on cup shapes and environmental temperatures that allow either answer to be true. This will end up as a blog post I guess.
iOS/Mac app for learning Japanese by reading, all in one solution with optional Anki integration
I went full-time on this a couple years ago. I’m now doing a full iOS 26 redesign, just added kanji drawing, and am almost done adding a manga mode via Mokuro. I’m also preparing influencer UGC campaigns as I haven’t marketed it basically at all yet.
The first is a DNS blocker called Quietnet - https://quietnet.app. It's born out of my interest in infrastructure, and I wanted to build an opinionated DNS blocker that helps mom and pops be safer on the Internet. At the end of the day it's just the typical Pi-hole in the cloud, but with my personal interest in providing stronger privacy for our users while keeping their families safe.
The second, is a small newsletter aggregator tool called Newsletters.love - https://newsletters.love/.
I wanted to create a way for people to start curating their own lists of newsletters and then share them with their friends and families. The service generates a private email address that they can use to subscribe to newsletters and then read them whenever they want, without anything getting lost in their email inbox.
After spending so much of my career dealing with APIs and building tooling for that, I feel there's a huge gap between what is needed and possible vs how the space generally works. There's a plethora of great tools that do one job really well, but when you want to use them, the integration will kill you. When you want to get your existing system into them, it takes forever. When you want to connect those tools, that takes even longer.
The reality I'm seeing around myself and hearing from people we talk to is that most companies have many services in various stages of decay. Some brand new and healthy, some very old, written by people who left, acquired from different companies or in languages that were abandoned. And all of that software is still generating a lot of value for the company and to be able to leverage that value APIs are essential. But they are incredibly hard and slow to use, and the existing tools don't make it easier.
It’s the beginnings of a web-based “choose your own adventure” book builder.
I’m sure other such things exist, but I wanted to build one with a focus on a natural authoring process - no HTML or Markdown, just plain text so you can quickly type your way to fun.
The primary users (both for authoring and playing the books) will be my kids.
I’m also getting a kick out of building it with zero dependencies (I’m not counting the linter/formatter, Biome) in JavaScript, especially enjoying writing tests with just `node:assert`.
I’m planning on making it themeable in the style of CSS Zen Garden - i.e. you can’t change the generated HTML but you can style it with CSS however you like.
The platform also supports HR for the organization by presenting in-depth anonymized data surrounding team interactions, exceptional individuals, and potential bottlenecks within the organization caused by qualitative issues. Aiming to launch by end of year and working with small businesses as free test users for feedback and validation.
The goal is to provide a fully typed nodeJS framework that allows you to write a typescript function once and then decide whether to wire it up to http, websocket, queues, scheduled tasks, mcp server, cli and other interactions.
You can switch between serverless and server deployments without any refactoring, completely agnostic to whatever platform you're running it on.
It also provides services, permissions, auth, eventhub, advanced tree shaking, middleware, schema generation and validation and more
The way it works is by scanning your project via the TypeScript compiler and generating a bootstrap file that imports everything you need (hence tree shaking), and allows you to filter your backend down to only the endpoints needed (great for plucking out individual entry points for serverless). It also generates typed fetch, RPC, websocket and queue client files. Types are pretty much most of what pikku is about.
Think honoJS and nestJS sort of combined together, but also supporting most server standards, not just HTTP.
The website needs love; currently working on a release to add CLI support and full tree shaking.
It clearly supports different runtimes than node with different capabilities and limitations.
It seems more of a runtime-agnostic web server.
I agree framing pikku has been a pretty hard challenge for me.
It supports different runtimes in the sense of deno / bun or custom nodeJS runtimes in the cloud, but ultimately relies purely on typescript / a JavaScript compatible backend.
It’s less of a webserver and more of a lightweight framework though, since it also supports CLIs or Frontend SDKs / isn’t tied to running an actual server.
https://github.com/bobjansen/mealmcp
There is a website too so you don’t actually need to use MCP:
It’s one command that lets you boot Linux on other computers via LAN. Cross platform, rootless
I think I’ve figured out a way to make a pxehost app for mobile devices, so you can boot Linux installers with an app on your phone
Server-side analysis to flag conversations, along with video feed analysis, is in the works... far from being perfect.
The user-driven reporting system has been challenging because I have not found a way to validate the reports.
I am striving for balance... not too strict and also allowing to be spontaneous. It's a fascinating and incredibly difficult problem.
There are other learning projects that are not in a shareable state, but help me to focus on something else when I am overwhelmed.
-----
COCKTAIL-DKG - A distributed key generation protocol for FROST, based on ChillDKG (but generalized to more elliptic curve groups) -- https://github.com/C2SP/C2SP/pull/164 | https://github.com/C2SP/C2SP/issues/159
-----
A tool for threshold signing software releases that I eventually want to integrate with SigStore, etc. to help folks distribute their code-signing. https://github.com/soatok/freeon
-----
Want E2EE for Mastodon (and other ActivityPub-based software), so you can have encrypted Fediverse DMs? I've been working on the public key transparency aspect of this too.
Spec: https://github.com/fedi-e2ee/public-key-directory-specificat...
Implementation: Coming soon. The empty repository is https://github.com/fedi-e2ee/pkd-server-go but I'll be pushing code in the near future.
You can read more about this project here: https://soatok.blog/category/technology/open-source/fedivers...
https://codeberg.org/Timwi/JigGen
The newest addition is a hexagonal piece cut, bringing the number of built-in geometries to 5.
I’ve spent a while understanding what sort of market would make it viable. I think it does actually work if you can square: 10K participants per major metro area, revenue of about 2.9M per metro area (so say, 5K monthly recurring with about 50 customers).
At that point you could pay data union participants about $5 a week to share their location data with you.
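Sanity-checking those figures (rough arithmetic on the numbers above):
```
# Revenue per metro vs. participant payouts, using the figures quoted above.
customers, mrr = 50, 5_000
revenue = customers * mrr * 12               # $3.0M/yr, in line with the ~2.9M figure

participants, weekly_payout = 10_000, 5
payouts = participants * weekly_payout * 52  # $2.6M/yr paid out to members
print(revenue - payouts)                     # ~$400K/yr left to run the union
```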
From talking to some previous data union folks, the major challenges are paying out (my target is much higher than any union managed), and people dropping out over time.
My bet is that these are both solvable things by selling data products rather than just bundles of data, and the data source being very passive.
I’m also interested in the idea that such a union should act more like a union than previous efforts in this space, by actively defending members’ data from brokers.
https://jsassembler.fly.dev/ https://csharpassembler.fly.dev/ https://goassembler.fly.dev/ https://rustassembler.fly.dev/ https://nodeassembler.fly.dev/ https://phpassembler.fly.dev/
The purpose is to find out if I can build declarative software in multiple languages (Rust, Go, Node.js, PHP and JavaScript) while knowing only one language (C#), without understanding the implementation deeply.
Another purpose is to validate AI models and their efficiency: development using AI is hard but highly productive, and having declarative rules to recreate the implementation may be used to validate models.
Currently I am convinced it is possible to build. Now I'm working on creating a solid foundation, with tests of the two assembler engines, structure dumps, logging, and logging outputs, so those can be used by the AI when it needs to fix issues iteratively.
I need to add more declarative rules and implement a full-stack web assembler to see if the AI will hit the technical debt that slows/stops progress. Only time will tell.
- great idea / app
- I am never gonna use it
- I bet shit-ton of people still use checks for something and this is cool for them
The idea is that a beginner should be able to wire up a personally useful agent (like a file-finder for your computer) in ten minutes by writing a simple prompt, some simple tools, and running it. Easy to plug in any kind of tracing, etc. you want. I have three or four projects in prod which I'll switch over to it, just to make sure it fits all those use-cases.
But I want to be able to go from someone saying "can we build an agent to" to having the PoC done in a few minutes. Everything else I've looked at so far seems limited, or complicated, or insufficiently hackable for niche use-cases. Or, worst of all, in Python.
It helps you monitor metrics, logs, and consumer behavior in real time.
Check it out: https://klogic.io
Book a demo: https://klogic.io/request-demo/
Features:
- Message inspection from any topic — trace and analyze messages, view flow, lag, and delivery status
- Anomaly detection & forecasting — predict lag spikes, throughput drops, and other unusual behaviors
- Real-time dashboards for brokers, topics, partitions, and consumer groups
- Track config changes across clusters and understand their impact on performance
- Interactive log search with filtering by topic, partition, host, and message fields
- Build custom dashboards & widgets to visualize metrics that matter to your team
What pain points do you face in monitoring Kafka? Which features would you like next, and what improvements would you want in dashboards, log search, or message inspection?
We will add the screenshots.
[1] https://nid.nogg.dev [2] https://mood.drone.nogg.dev
Also working on a youtube channel [3] for my climbing/travel videos, but the dreary state of that website has me wondering whether it's worth it, tbh. I haven't been able to change my channel name after trying for weeks. It's apparently the best place to archive edited GoPro footage at least.
* LLMs are accessible where telegram is accessible
* Multitude of models to choose from (chatgpt, claude, gemini) and more is coming.
* Full control over the bot behaviour is in the user's hands: I don't add any system messages or temperature/top_p settings of my own. I give a UI for full control over system messages, temperature, top_p, thinking, web searching/scraping and more to come.
* Q/A-like context handling. Context is not carried through the whole bot; rather, it's carried through chains of replies. Naturally it can be branched, or use different models across messages.
--
This is my hobby project and one of main tools for working with LLMs, thus I'm going to stick to it for quite a while.
An always up-to-date contact book.
Share as many or as few of your contact details with another person via a single link.
When you update a contact detail, those you have shared it with see the update.
Did you move? Your whole family can see the new address.
I am working on the ability for you to share contact with businesses. The dream is that I can update my address in one place then my CC provider and every subscription using that credit card will have my new billing address automagically.
I noticed a gap - our customers are required to upload sensitive documents but often hesitate at the thought of uploading documents in the intercom/crisp interface, citing privacy concerns.
I thought, how difficult would it be to build an app that sends documents to your own Google Drive - turns out it's very easy. In a week, we built an app that renders an iframe in the intercom chat interface and sends documents straight to our Google Drive folder, bypassing Intercom altogether.
We’re now investigating uploading to s3 or azure blob storage and generating summaries of documents that are sent to the intercom conversation thread so ops teams can triage quicker.
Let me know what you think!
Write a dev blog in Word format using Tritium, jot down bugs or needs, post blog, improve and repeat.
The main pitch is you have minimal dependencies and overheads and can run tests natively on pandas/polars/pyspark/dask/duckdb/etc (thanks to the awesome Narwhals project)
It's mostly there for v1 right now, but I'm keen to add a tiny bit more functionality, and, well, a lot more docs. I'm working on something that's automated alongside the test suite, which should keep things reliable and fresh (I'll find out soon enough).
[0] https://github.com/benrutter/wimsey / https://codeberg.org/benrutter/wimsey
It's an AI-webapp builder with a twist: I proxy all OpenAI API calls your webapp makes and charge 2x the token rate; so when you publish your webapp onto a subdomain, the users who use your webapp will be charged 2x on their token usage. Then you, the webapp creator, gets 80% of what's left over after I pay OpenAI (and I get 20%).
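Concretely, the split works out like this (illustrative numbers, per the description above):
```
# Revenue split on $1.00 of underlying OpenAI token cost.
openai_cost = 1.00
user_billed = 2 * openai_cost        # end users pay 2x the token rate -> $2.00
margin = user_billed - openai_cost   # $1.00 left after paying OpenAI
creator_cut = 0.80 * margin          # $0.80 to the webapp creator
platform_cut = 0.20 * margin         # $0.20 to the platform
```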
It's also a fun project because I'm making code changes a different way than most people are: I'm having the LLM write AST modification code; My site immediately runs the code spit out by the LLM in order to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
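The general shape of the trick, sketched in Python for illustration (the production system works on the webapp's own code; the transform string below stands in for LLM output):
```
# Toy version of "the LLM writes AST-modification code, which is run immediately".
import ast

source = "def greet():\n    return 'hello'\n"

llm_generated_transform = """
class Rename(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        node.name = 'welcome'  # the change requested in the ticket
        return node
Rename().visit(tree)
"""

ns = {"ast": ast, "tree": ast.parse(source)}
exec(llm_generated_transform, ns)   # run the generated transform against the tree
print(ast.unparse(ns["tree"]))      # -> def welcome(): ...
```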
Financial institutions and governments fail to spot crime because of incomplete information at individual firms. We help them understand federated learning and how to effectively collaborate, not just talk about it. All code is open source, so you can always help out ;-)
Some industry players are coming around: https://www.swift.com/news-events/press-releases/swift-ai-in...
lpviz is like Desmos, but for linear programming - I've implemented a few LP solvers in Typescript and hooked them up to a canvas so you can draw a feasible region, set an objective direction, and see how the algorithms work. And it all runs locally, in the browser!
If you go to https://lpviz.net/?demo it should show you a short tour of the features/how to use it.
It's by no means complete but I figured there may be some fellow optimization enthusiasts here who might be interested to take a look :) Super open to feedback, feature requests, comments!
For a 2-min intro to LP, I recommend https://www.youtube.com/watch?v=7ZjmPATPVzI
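If you haven't seen a linear program before, here's a toy two-variable example (mine, not one from the site):
```
\begin{aligned}
\max_{x,\,y}\quad & 3x + 2y \\
\text{s.t.}\quad  & x + y \le 4 \\
                  & x \le 2 \\
                  & x,\, y \ge 0
\end{aligned}
```
The constraints carve out a polygon (the feasible region), and the optimum sits at the vertex (2, 2) with value 10 - exactly the kind of geometry lpviz lets you draw and watch the solvers navigate.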
- What: Sun Grid Engine–style scheduler + Docker on System-on-Module (SoM) boards for reproducible tests/benchmarks and interactive SSH sessions (remote dev).
- Who: Robotics/embedded engineers comparing SoMs and tuning models/pipelines on target platforms.
- Why: Reproducible runs, easy board access, comparable reports.
Pulled this side project off the shelf — something I started after covid, when I was working at one of the consumer robotics companies (used to be the largest back then). Got it mostly working, but never actually released. I tend to dust it off and push it along a bit whenever I’m between jobs. Like now...
Feels good to be back at it.

The basic idea is that integrating business data into a B2B app or AI agent process is a pain. On one side there are web data providers (Clearbit, Apollo, ZoomInfo); on the other, 150-year-old legacy providers based on government data (D&B, Factset, Moody's, etc). You'd be surprised to learn how much manual work is still happening - teams of people just manually researching business entities all day.
At a high level, we're building out a series of composable deep research APIs. It's built on a business graph powered by integrations to global government registrars and a realtime web search index. Our government data index is 265M records so far.
We're still pretty early and working with enterprise design partners for finance and compliance use cases. Open to any thoughts or feedback.
Fitness Tools https://aretecodex.pages.dev/tools/
Fitness Guides https://aretecodex.pages.dev/
A lot of people often ask questions like:
- How do I lose body fat and build muscle?
- How can I track progress over time?
- How much exercise do I actually need?
- What should my calorie and macro targets be?
One of the most frequently asked questions in fitness forums is about cutting, bulking, or recomposition. This tool helps you navigate those decisions: https://aretecodex.pages.dev/tools/bulk-cut-recomposition-we...
We’ve also got a Meal Planner that generates meal ideas based on your calorie intake and macro split: https://aretecodex.pages.dev/tools/meal-plan-planner
Additionally, I created a TDEE Calculator designed specifically to prevent overshooting TDEE in overweight individuals: https://aretecodex.pages.dev/tools/tdee-calculator
For a deeper dive into the concept of TDEE overshoot in overweight individuals, check out this detailed post: https://www.reddit.com/r/AskFitnessIndia/comments/1mdppx5/in...
That’s why I’ve been building 'Fragno', a framework for creating full-stack libraries. It allows library authors to define backend routes and provides reactive primitives for building frontend logic around those routes. All of this integrates seamlessly into the user’s application.
With this approach, providers like Stripe can greatly improve the developer experience and integration speed for their users.
I was tired of only having 1 or 2 things per newsletter that interested me, multiplied by however many newsletters I've subscribed to. Trying to solve that.
The idea: design newsletter sections on whatever topics you want (football scores, tech news, new restaurants in your area, etc.), choose your tone and length preferences, then get a fully cited digest delivered weekly to your inbox. Completely automated after initial setup (but you can refine it anytime).
Have the architecture sorted and a pretty good dev plan, but collecting interest before I invest a ton of time into it.
If you feel this pain too, waitlist is here: https://www.conflio.app/
(Or maybe I'm just too lazy about staying informed haha)
It’s minimal in design, but packed with features.
The USP that customers seem to really value is posting by email. It massively reduces the friction required to blog and is surprisingly enjoyable.
Launching next week is custom home pages with dynamic variables. It’s in beta already, see https://iamgregb.io.
Pagecord is free and source available with an unbeatably priced premium plan of $29/year.
Follow along on GitHub: https://github.com/lylo/pagecord
Feedback welcome! :)
This all started when I was first learning the language and was having a lot of trouble understanding different dialects, I want to make sure this is not a problem for future learners!
It's my first major project so it has been quite the learning experience, especially when it comes to infrastructure/hosting. The project is still pretty early on but I hope to show it off soon.
The solution? Have the cartridge keep track of CPU parity (there's no simple way to do this with just the CPU), then check that, skip one cycle if needed... and very carefully cycle time the rest of the routine, making sure that your reads land on safe cycles, and your writes land in places that won't throw off the alignment.
But it works! It's quite reliable on every console revision I've thrown it at so far. Suuuper happy with that.
My daughter loves stories, and I often struggled to come up with new ones every night. I remember enjoying local folk tales and Indian mythological stories from my childhood, and I wanted her to experience that too — while also learning new things like basic science concepts and morals through stories.
So I built Dreamly and opened it up to friends and families. Parents can set up their child’s profile once - name, age, favorite shows or characters, and preferred themes (e.g. morals, history, mythology, or school concepts). After that, personalized stories are automatically delivered to their inbox every night. No more scrambling to think of stories on the spot!
I also like making up stories when we go on hikes. Long, rambling stories about unicorns befriending spiders and flying to faraway lands.
If you zoom out it's meant to look something like a thermal vent with cellular life. Rank and karma cause the cells to bio-illuminate. Each cell is a submission, each organelle is a comment thread, and every shape represents a live connection to the Firebase HN API. It also has features to search, filter, and go back in time as far as the backend has been running.
It's been a passion project of mine. My little Temple OS. And I'll keep adding little features that please me.
https://hackernews.life/?s=top&id=45561428&c=0&t=1760303616
You can press the fast-forward button or drag the slider to the right to watch it evolve.
Nice to call it feature complete and move on!
Hoping to make the best typing application.
Key features:
- absolutely no ads; this is an education tool
- SmartPractice: we use a time-series DB to track weak points (bigrams, individual characters, etc., with their time-to-type and errors) and then select the most relevant ones to generate typing practice text (see the sketch below)
- advanced stats: after each session we provide more detailed stats than any other typing platform
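A toy illustration of the idea behind SmartPractice (the scoring blend here is made up for the example, not the actual algorithm):
```
# Toy SmartPractice-style selection: blend per-bigram typing latency and error
# counts, then drill the weakest ones. The scoring formula is illustrative only.
from collections import defaultdict

stats = defaultdict(lambda: {"ms": [], "errors": 0})
stats["th"]["ms"] = [180, 200]                             # fast, clean bigram
stats["qu"]["ms"] = [340, 360]; stats["qu"]["errors"] = 2  # slow and error-prone

def weakness(bigram: str) -> float:
    s = stats[bigram]
    avg_ms = sum(s["ms"]) / len(s["ms"])
    return avg_ms + 100 * s["errors"]   # crude blend of latency and errors

worst = sorted(stats, key=weakness, reverse=True)[:5]
practice_text = " ".join(bigram * 3 for bigram in worst)  # e.g. "quququ ththth"
```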
This is a passion project that turned into a business.
Hope you try it :)
It's basically a reverse-proxy-as-a-service. I handle TLS termination and cert management, offer routing rules, rate limiting, WAF + DDOS protection, proxy + web analytics, redirects etc. All accessible via very simple API.
Underneath it's Caddy hosted on AWS for proxy fleets, and Heroku for Web + API fleets.
Any feedback is welcome!
I am currently developing a web app consisting of a Spring/Kotlin backend and an Angular frontend that is meant to provide a UI for kubectl. It has OAuth login, allows you to store several Kubernetes configs and select which one to use, and makes it unnecessary to remember all the kubectl commands I can never remember.
It's what I'd like to have if I had to interact with a kubernetes cluster at work. Yes, I know there are several kubernetes UIs already, but remember, this is for 1) learning and 2) following through and completing a project at least somewhat.
https://github.com/skanga/Conductor
Conductor is a LLM agnostic framework for building sophisticated AI applications using a subagent architecture. It provides a robust platform for orchestrating multiple specialized AI agents to accomplish complex tasks, with features like LLM-based planning, memory persistence, and dynamic tool use.
This project is inspired by the concepts outlined in "The Rise of Subagents" by Phil Schmid at https://www.philschmid.de/the-rise-of-subagents and aims to provide a practical implementation of this powerful architectural pattern.
I regularly browse Reddit (and Hacker News) to keep up with new trends and research topics, but it's really time-consuming.
- It’s hard to find the right communities. Search and recommendation features aren’t quite there, and I don’t want to just passively scroll a feed.
- Going through all the comments takes too long. I just want to quickly grasp the main points people are making. If interested, I can dive in further.
So I started this project to help streamline that process—kind of like a “deep research” workflow for my own browsing.
It’s still early, but it’s already saving me time. If anyone knows of similar tools out there, I’d love to hear about them.
But I went in a different direction: it is a mix of an RSS reader with summarization. https://rss.sabino.me/
It is open source, and hosted for free on github pages, so you can customize the feeds and reddit communities.
There is also a configuration ready to use a local Llama via the GitHub build system, so you don't have to rely on paying for AI services.
It's for doing realtime "human cartography", to make maps of who we are together in complex large-scale discourse (even messy protest).
https://patcon.github.io/polislike-human-cartography-prototy...
Newer video demo: https://youtu.be/C-2KfZcwVl0
It's for exploring human perspective data -- agree, disagree, pass reactions to dozens or hundreds of belief statements -- so we can read it as if it were Google Maps.
My operating assumption is that if a critical mass of us can understand culture and value clashes as mere shapes of discourse, and we can all see it together, then we can navigate them more dispassionately and with clear heads. Kinda like reading a map or watching the weather report -- islands that rise from oceans, or plate tectonics that move like currents over months and terraform the human landscape -- maybe if we can see these things together, we'll act less out of fear of fun-house caricatures. (E.g., "Hey, dad, it seems like the peninsula you're on is becoming a land bridge toward the alt-right corner. I feel a little bummed about that. How do you feel about it?")
(It builds on data and the mathematical primitives of a great tool called Pol.is, which I've worked with for almost a decade.)
Experimental prototype of animating between projections: https://main--68c53b7909ee2fb48f1979dd.chromatic.com/iframe.... (advanced)
Shipping pets and animals across borders is a big problem, and we are building the operating system to solve it at scale. If you are a vet (or work in the veterinary space), we would love to talk to you.
An agent that plugs into Slack and helps companies identify and remediate infrastructure cost-related issues.
Makes it easy to search, favourite and listen to online radio streams.
I like to listen to online radio while working and none of the available web apps I could find hit the nail on the head, so decided to build my own.
- Writing a book about Claude Code, not just for assisted programming, but as a general AI agent framework.
https://github.com/anthropics/claude-agent-sdk-python/commit...
Claude Code used to be a coding agent only, but it transformed into a general AI agent. I want to explore more about that in this book.
There is nothing special compared to other livechats; the goal is to offer an affordable and unlimited livechat for small projects and companies.
Our company would love a well designed chat button linked to Slack, combined with a helpdesk that supports email queries and also allows people to raise issues via the web.
That’s it, that’s all we need. Happy to pay.
It’s hard to express how badly intercom is designed and engineered. It’s also very expensive and constantly upsold, despite being rubbish. If no one fixes this it will be my next startup.
Too many companies have gone down the road of “AI support”, without understanding that AI must rest on the foundation of great infrastructure. Intercom are pushing their AI so hard it’s absolutely infuriating.
It's a sync infra product that is meant to cut down 6 months of development time, and years of maintenance of deep CRM sync for B2B SaaS.
Every Salesforce instance is a unique snowflake. I am moving that customization into configuration and building a resilient infrastructure for bi-directional sync.
We also recently launched a pretty cool abstraction on top of Salesforce CDC which is notoriously hard to work with: https://www.withampersand.com/blog/subscribe-actions-bringin...
I've been working on my own arrangements, putting chords in lyrics, and the program produces a page with the chord diagrams next to each song. ChordPro descends from a long lineage of programs that do this, but it's been under active development in the last 3-4 years. The developer is quite nice, and attends to bug reports.
Formo makes analytics and attribution simple for onchain apps. You get the best of web, product, and onchain analytics on one versatile platform.
Have learned a lot about data engineering so far.
Not sure if there's more to say about it right now, except that fuzz tests are good for this sort of low-level programming with disk layouts involved. They drive up test execution time, but it's almost hard to build them too early or have too many of them: there's almost always an unimaginable number of permutations of weird corner cases around block boundaries and the like that are hard to identify by staring at the code and writing classic unit tests.
It started as a side-project for my own needs. I wanted to share flight replays of my paragliding tracks on Instagram, but there just was no tool for it, so I hacked something together. People seemed to like the resulting videos, so I figured I could build a UI for it, publish it, and see what happens :)
Currently my biggest focus is my MUD Server I'm working on. Allows a developer to create a simple MUD game, (locations, items, combat), but all NPCs are actually just LLM controlled MUD clients.
Uses Server-Sent Events for the client + HTTP post for sending actions. Not a traditional direct TELNET style MUD server, but works well in the modern world.
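As a rough illustration of that transport (a minimal sketch, not the actual server; Flask and the endpoint names are assumptions):
```
# Sketch: SSE stream out, HTTP POST in - the MUD transport described above.
import queue
from flask import Flask, Response, request

app = Flask(__name__)
events = queue.Queue()  # a real server would keep one queue per connected client

@app.route("/events")
def stream():
    def gen():
        while True:
            msg = events.get()          # block until the game loop emits something
            yield f"data: {msg}\n\n"    # SSE wire format
    return Response(gen(), mimetype="text/event-stream")

@app.route("/action", methods=["POST"])
def action():
    cmd = request.get_json()["command"]  # e.g. {"command": "go north"}
    events.put(f"You try to: {cmd}")     # the game loop would process this for real
    return {"ok": True}
```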
Definitely not 100% hand-coded, probably only around 30% at this point, as I've had my original code refactored and expanded many times by now. It's taught me a lot about managing the agent in agentic-coding.
Lately, I have been considering creating my own as a "something to do in my downtime" project, using Coffee or Evennia, but if you're creating a full server environment that makes rolling out a playable world more convenient, sign me up.
[1] https://www.robinsloan.com/notes/home-cooked-app/ [2] https://booplet.com/blog/anyone-can-cook
Building desktop environment in the cloud with built in cloud storage, AI, processing, app ecosystem and much more!
The target is servers/containers, but I plan to make a gaming/desktop release as well.
No site yet, but I do have a minimal rootfs that mostly works. It could be considered an alternative to Alpine, but more minimalist in the userland while maximalist as regards the kernel and firmware. Also, it's source based and far more opinionated.
This is entirely just for fun, and I do not have any expectation that anyone will want to use it besides myself.
It’s designed to plug into frameworks like CrewAI, AutoGen, or LangChain and help agents learn from both successful and failed interactions - so instead of each execution being isolated, the system builds up knowledge about what actually works in specific scenarios and applies that as contextual guidance next time. The aim is to move beyond static prompts and manual tweaks by letting agents improve continuously from their own runs.
Currently also working on an MCP interface to it, so people can easily try it in e.g. Cursor.
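In case it helps make the idea concrete, here's a toy sketch of the store-outcomes-then-retrieve-guidance loop — every name here is hypothetical, not the actual API:

```python
# Hypothetical sketch of run-memory: record outcomes, fold lessons into context.
import sqlite3

db = sqlite3.connect("agent_memory.db")
db.execute("""CREATE TABLE IF NOT EXISTS runs
              (scenario TEXT, outcome TEXT, lesson TEXT)""")

def record_run(scenario: str, succeeded: bool, lesson: str) -> None:
    db.execute("INSERT INTO runs VALUES (?, ?, ?)",
               (scenario, "success" if succeeded else "failure", lesson))
    db.commit()

def guidance_for(scenario: str) -> str:
    rows = db.execute("SELECT outcome, lesson FROM runs WHERE scenario = ?",
                      (scenario,)).fetchall()
    # Past lessons become extra context for the agent's next prompt.
    return "\n".join(f"[{outcome}] {lesson}" for outcome, lesson in rows)
```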
The goal is to catch vulnerabilities early in the SDLC by running an agentic loop that autonomously hunts for security issues in codebases. It's currently available as a CLI tool and VSCode extension. I've been actively using it to scan WordPress and Odoo plugins and have found several privilege escalation vulns. I've documented this in a blog post here: https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
Our approach is to make the complexity more readable by using three simple block types to represent logic, data, and UI, which are connected by cables – a bit like wiring up components on an electronics breadboard.
Instead of spitting out a wall of code, the AI generates these visual blocks and makes the right connections between them. The ultimate goal is to make the output from LLMs more accessible and actionable for everyone, not just developers.
It’s a simple NPM package that produces colorful avatars from input data to aid with quick visual verification. I’d like to see it adopted as a standard.
I have been working on a one week side-project that ended up taking over a year… Working on it periodically with friends to add new features and patch bugs, at the moment I'm trying to expand the file sharing capabilities. It's been a journey and I have learnt quite a lot.
The aim of this is to be a simple platform to share content with others. Appreciate any feedback, this is my first time building a user facing platform. If the free tier is limiting, I've made a coupon "HELLOWORLD" if you want to stress test or try the bigger plans, it gives you 100% off for 3 months.
It's already working, and slightly faster than the CPU version, but that's far from an acceptable result. The occupancy (which is a term I first learned this week) is currently at a disappointing 50%, so there's a clear target for optimisation.
Once I'm satisfied with how the code runs on my modest GPU at home, the plan is to use some online GPU renting service to make it go brrrrrrrrrr and see how many new elements I can find in the series.
Imagine your basic Excel spreadsheet -> generating document files, but add:
- Other sources like SQL queries
- User form (e.g. "Generate documents for Client Category [?]")
- Chaining sources in order like SQL queries with parameters based on the user form
- Split at multiple points (5 records in a csv, 4 records in a sql result = 20 generated documents)
- Full Jinja2 templating with field substitution, but also if/for blocks that work nicely with .docx files
- PDF output
- output file names using the same templating: "/BusinessDrive/{{ client_id }}/Invoice - {{ invoice_id}}.pdf"
All saved in reproducible workflows (for example if you need to process a .csv file you receive each morning)
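As a rough illustration of the cross-product and templated-filename ideas above (field names and data are made up; only Jinja2 is assumed):

```python
# Sketch: cross-product of sources + Jinja2-templated bodies and filenames.
from itertools import product
from jinja2 import Template

csv_rows = [{"client_id": "A1"}, {"client_id": "B2"}]   # e.g. parsed from a .csv
sql_rows = [{"invoice_id": 1}, {"invoice_id": 2}]       # e.g. a query result

body_tpl = Template("Invoice {{ invoice_id }} for client {{ client_id }}")
name_tpl = Template("/BusinessDrive/{{ client_id }}/Invoice - {{ invoice_id }}.pdf")

# 2 csv records x 2 sql records = 4 generated documents
for c, s in product(csv_rows, sql_rows):
    row = {**c, **s}
    print(name_tpl.render(**row), "->", body_tpl.render(**row))
```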
Also been doing small little prototypes with cursor/claude for a game I'd love to tinker on more.
https://prototype-actions.prefire.app/
https://prototype-fov.prefire.app/
It's quite an interesting process to vibe code game stuff where I have a vague concept of how to achieve things but no experience/muscle memory with three.js & friends.
Currently it looks like this:
- code editor directly in the browser
- writes to your local file system
- UI-specific features built into the editor
- ways to edit the CSS visually as well as using code
- integrated AI chat
But I have tons of features I want to add: asset management, image generation, collaborative editing, etc. It's still a prototype, but I'm actively posting about it on twitter as I go. Soon, I'll probably start publishing versioned builds for people to play with: https://x.com/danielvaughn
The goal was to make the learning material very malleable, so all content can be viewed through different "lenses" (e.g. made simpler, more thorough, from first principles, etc.). A bit like Wikipedia it also allows for infinite depth/rabbit holing. Each document links to other documents, which link to other documents (...).
I'm also currently in the middle of adding interactive visualizations which actually work better than expected! Some demos:
From there, users can either send funds to another wallet or spend directly using a pre-funded debit card. It’s still early, but we’re testing with a small group of users who want to receive payments faster and avoid PayPal or wire fees.
If you’re a freelancer or digital nomad interested in trying it out, you can check it out here: https://useairsend.com
That main use case is done. I’m now focusing on travel guides for remote workers. The goal is to help those new to a country become as productive as they would be at home within 2-3 hours of landing at the airport. I've completed 80% of a guide to South Korea.
I started working on these guides after my friends in Tokyo commented during our last co-working session on how fast I got to our favourite spot (Tokyo Innovation Base) from Narita Airport; they thought I was already in-town.
We are in it for the long term. Not a startup, not looking for investment. Just a plain paid product (free while in beta) by a few people. We have a few active users, and are looking for more before we remove the beta label :) It's a PWA app, currently targeted at desktops. For personal software, I think local-first makes a lot of sense.
You can read more about it and watch a demo: https://blog.with.audio/posts/web-reader-tts
I built this to get some traffic to my main project's website using a free tool people might like. The main project: https://desktop.with.audio -> a one-time-payment text-to-speech app for macOS (ARM only) and Windows, with text highlighting, MP3 export, and other features.
I just took Qwen-Image and Google’s image AIs for a spin and I keep a side by side comparison of many of them.
https://generative-ai.review/2025/09/september-2025-image-ge...
and I evaluated all the major 3D Asset creators:
https://generative-ai.review/2025/08/3d-assets-made-by-genai...
It’s been a fun, practical way to continuously evaluate the latest models two ways - via coding assistance & swapping between models to power the conversational AI voice partner. I’ve been trying to add one big new feature each time the model generation updates.
The next thing I want to add is a self improving feedback loop where it uses user ratings of the calls & evaluations to refine the prompts that generate them.
Plus it has a few real customers which is sweet!
I'm trying to use this to create stories that would be somewhat unreasonable to write otherwise. Branching stories (i.e., CYOA), multiperspective stories, some multimedia. I'm still trying to figure out the narrative structures that might work well.
LLMs can overproduce and write in different directions than is reasonable for a regular author. Though even then I'm finding branching hard to handle.
The big challenges are rhythm, pacing, following an arc. Those have been hard for LLMs all along.
https://github.com/rumca-js/Internet-Places-Database
Still working on the crawling framework:
https://github.com/rumca-js/crawler-buddy
Still working on the RSS client
Basically the title explains it: I challenged myself to make a Chrome extension a day for a month. I've been posting my progress on Reddit, and my first two extensions have just been accepted to the Chrome store (I'm only on day 3 so far, those were quick reviews!). For those interested:
Day 1: Minimal Twitter
Day 2: No Google AI Overview in Google Search
Day 3: No Images Reddit (Not Published, yet!)
I'm posting daily, I would love to hear thoughts on the extensions!!
We're pretty jazzed.
You define the resources needed for each activity, the time per activity, and the dependencies between activities to complete a process.
After you input the process you want to complete, you get a schedule similar to a Gantt chart.
The system displays which activities should be ongoing at any moment, and you click the GUI or call the API to complete the activities.
After the process is complete you get a report of delays and deviations by Takts, activities and resources.
Based on that report you can decide what improvements to make to your process.
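For the scheduling core, a topological pass over the dependency graph gives earliest start times; a minimal sketch with invented activities (not the actual product's code):

```python
# Sketch: earliest start per activity from durations and dependencies.
from graphlib import TopologicalSorter

duration = {"dig": 2, "pour": 3, "frame": 5}   # hypothetical activities
deps = {"pour": {"dig"}, "frame": {"pour"}}    # activity -> prerequisites

start = {}
for act in TopologicalSorter(deps).static_order():
    # An activity can start once its latest prerequisite finishes.
    start[act] = max((start[d] + duration[d] for d in deps.get(act, ())),
                     default=0)
    print(act, "starts at", start[act], "ends at", start[act] + duration[act])
```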
They mostly work already, would appreciate testing from anyone who already has a larger, real-world Litestream v0.5.0 setup running.
https://fly.io/blog/litestream-revamped/#lightweight-read-re...
https://github.com/ncruces/go-sqlite3/tree/litestream/litest...
Just added health inspection data from countries that have that in open datasets (UK and Denmark). If anyone know of others I'd be appreciative of hints.
Thinking of focusing on another idea for the rest of the year, have a rough idea for a map based ui to structure history by geofences or lat / lng points for small local museums
Still reducing the design costs of a micro positioning stage for hobbyists. I observed the driver motion was mostly synchronous and symmetric... Accordingly, given the scale, only a single multiplexed piezoelectric actuator motor driver was actually needed, which cut that part of the design cost by 75%.
Still designing various test platforms to validate other key technologies. Sorry, no spoilers =3
It's called lazyslurm - https://github.com/hill/lazyslurm
Would love feedback! <3
A scanner for pilots to convert handwritten flight logs to CSV files: https://apps.apple.com/us/app/flightlogscan/id6739791303
And a silly, fun, speed-based word game: https://apps.apple.com/us/app/scramble-game/id6748549424 (my record is <4 seconds lmk if you can beat it!)
Let me know what you think :D
It’s an instagram style UI but for scrolling through record releases and snippets, worked on making it responsive as possible with low latency audio playback so you can browse a lot of stuff quickly.
Wrote about it on my blog: https://www.polymonster.co.uk/blog/diig
And GitHub repo: https://github.com/polymonster/diig
It's a full-funnel marketing attribution & insights tool with the intent of making marketing & marketing spend more transparent. We started by creating a UTM tracking tool for our agency clients, and currently it's a product of its own. We'll make it a platform to remove some of the limits that we have with WordPress and reach a larger audience.
EU based.
So I built Riff Radar - it creates playlists from your followed artists' complete discography, and allows you to tailor them in multiple ways. Those playlists are my top listened to. I know, because you can also see your listening statistics (at the mercy of Spotify's API).
The playlists also get updated daily. Think of it as a better version of the daily mixes Spotify creates.
It's mostly where I want it to be now, but still need to automate the ingest of USPTO data. I'd really like it to show a country flag on the search results page next to each item, but inferring the brand name just from the item title would probably need some kind of natural language processing; if there's even a brand in the title.
No support for their mobile layout. Do many people buy from their phone?
A LLM‑powered OSINT helper app that lets you build an interactive research graph. People, organizations, websites, locations, and other entities are captured as nodes, and evidence is represented as relationships between them.
Usually assistants like ChatGPT Deep Research or Perplexity are focused on fully automatic question answering, and this app lets you guide the search process interactively, while retaining knowledge in the graph.
The plan is to integrate it with multiple OSINT-related APIs, scrapers, etc
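The entity/evidence structure could be sketched with a multigraph where evidence lives on edges; a toy example (all names hypothetical, networkx assumed):

```python
# Sketch of the entity/evidence graph: entities are nodes, evidence is edges.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("Acme Corp", kind="organization")    # hypothetical entities
g.add_node("jdoe", kind="person")
g.add_node("acme.example", kind="website")

# Evidence is recorded on the relationships.
g.add_edge("jdoe", "Acme Corp", relation="employee_of", source="LinkedIn page")
g.add_edge("Acme Corp", "acme.example", relation="operates", source="WHOIS record")

for u, v, data in g.edges(data=True):
    print(f"{u} -[{data['relation']}]-> {v} (evidence: {data['source']})")
```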
I am building a tool that gives automated qualitative feedback on websites. This is the early and embarrassing MVP: https://vibetest-seven.vercel.app/product
You provide your URL and an LLM browses your site and writes up feedback. Currently working on increasing the quality of the feedback. Trying to start with a narrower set of tests that give what I think is good feedback, then increase from there.
If a tool like this analyzed your website, what would you actually want it to tell you? What feedback would be most useful?
My perfect user is a bodybuilder, a powerlifter, or someone who just takes weightlifting seriously.
I've also been obsessed with making it iOS native and a one-time purchase.
Been trying to build in public on Bluesky: @tobu.bsky.social
Simple landing page with a waitlist: https://plates.framer.website/
Done with Godot in just 7-8 months, it's fun how fast you can create things when you really focus on something :)
Started from the poor state of many Python HTTP clients and the poor testing utilities there are for them (e.g. the neglected state of httpx and all its perf issues).
Slice and Share; framing, diptychs, also helps share photos on social media without cropping: https://apps.apple.com/app/slice-and-share/id6752774728
Both are free, no ads, no account required. I use them myself; I’m looking to improve them too so feedback is very welcome.
Being a Ruby on Rails consultant, I frequently see active storage transformation becoming a bottleneck for web servers by eating up resources and making them sweat.
I built Fileboost to solve this problem for my customers. I'd love any feedback.
My biggest technical challenge remains dealing with the immense number of different APIs (and not-APIs) in the different status pages out there. Marketing remains my biggest overall challenge as my background is engineering, but I've learnt quite a bit since I launched this.
I just released the changelog 5 minutes ago (https://intrasti.com/changelog). I went with a directory-based approach using the international date format YYYY-MM-DD, so in the source code it's ./changelog/docs/YYYY/MM/DD.md - seems to do the trick, and it's ready for pagination, which I haven't implemented yet.
We're focused on solo owners and other busy local service based companies.
We're using OpenAI's newest real time voice API, which has been surprisingly responsive.
We received data last week verifying we are effectively mineralizing CO2 at a high rate while saving our farmer $135/acre annually in liming costs.
We’ve come this far on grants. Now it’s time to fundraise so we can bankroll our PhDs whilst we secure pre-purchase offtake deals.
If you know of any impact investors or are an offtake buyer at a large company, please email me at zach@goal300.earth
It's a browser extension right now, and the platform integrates with SSO providers and AI APIs to help discover shadow AI, enforce policies, and create audit trails. Think observability for AI adoption, but also Grammarly, since we help coach end users toward better behavior/outcomes.
Early days, but the problem is real; we have a few design partners in the F500 already.
They’re always on. They log into real sites, click around, fill out forms, and adapt when pages change — no brittle scripts, no APIs needed. You can deploy one in minutes, host it yourself, and watch it do work like a human (but faster, cheaper, never tired).
Kind of like a “browser-use cloud,” except it’s yours — open, self-hostable, and way more capable.
Most sites fall into extremes: Dribbble leans toward polished mockups that never shipped, while Awwwards and Mobbin go heavy on curation. The problem isn’t just what they pick — it’s that you only ever see a narrow slice. High curation means low volume, slow updates, and a bias toward showcase projects instead of the everyday, functional interfaces most of us actually design.
Font of Web takes a different approach. It’s closer to Pinterest, but purely for web design. Every “pin” comes with metadata: fonts, colors, and the exact domain it came from, so you can search, filter, and sort in ways you can’t elsewhere. The text search is powered by multimodal embeddings, so you can use search queries like “minimalist pricing page with illustrations at the side” and get live matches from real websites.
What you can do:
- natural language search (e.g. “elegant serif blog with sage green”)
- font search (single fonts, pairings, or 2+ combos, e.g. https://fontofweb.com/search/pins?family_id=109 , https://fontofweb.com/search/pins?family_id=135 )
- color search/sorting (done in perceptual CIELAB space, not RGB)
- domain search (filter by site, e.g. https://fontofweb.com/search/pins?domain=apple.com , https://fontofweb.com/search/pins?domain=blender.org )
- live website analysis (via the extension — snip any part of a page and see fonts/colors instantly; works offline)
- one-click font downloads
- palette extraction (copy hex codes straight to clipboard)
- private design collections
Appreciate feedback on the UX/UI, the feature set, and general usefulness in your own workflow.
Still a work in progress, but expecting to release by end of year. Built on Rust + Tauri, in case anyone is curious.
I've created various open-source and commercial tools in the multimedia space over the last 10+ years and wanted to put it all together into something more premium.
My partner shares our journey on X (@hustle_fred), while I’ve been focused on building the product (yep, the techie here :). We’re excited to have onboarded 43 users in our first month, and we're looking forward to getting feedback from the HN community!
It looks inside each file to see what it’s about, then moves it to the right folder for you.
Everything happens on your Mac, so nothing leaves your computer. No clouds, no servers.
It already works with PDFs, ePubs, text, Markdown, and many other file types. Next I’m adding Microsoft Office and iWork support.
If you have messy folders anywhere on your Mac, Fallinorg can help.
Self-hosted compute can also be linked to Daestro to run jobs on.
[0]: https://daestro.com
This is basically a variation on bin-packing (which is NP-hard), but it's tractable if you prune the search space enough.
You can check it out at https://antiques-id-1094885296405.us-central1.run.app/.
I think this project is an interesting addition as a software supply chain solution, but generating interest in the project at this early stage is proving difficult.
For those interested, I maintain a spec in parallel of the development at https://github.com/asfaload/spec
Since I’m using AI to build it and have a limited context window, I started with the hardest part first. So it only supports int64 for now.
If you want to help
Obviously this is quite sensitive data, so I architected it to never store raw data, to allow bring-your-own-key, and, even in team settings, to be fully private by default: everybody keeps control of all their results.
Started about six months ago, have some first users, and always looking for feedback!
It is a modified version of Shopify CEO Tobi's try implementation[0]. It extends his implementation with sandboxing capabilities and is designed with functional core, imperative shell in mind.
I had success using it to manage multiple coding agents at once.
Right now I am working on adding historical tables extracted from filings, as well as historical financials and their calculations.
Still a work in progress, but please check it out
With more than 300 references and around 1500 entries, covering more than all the lemmas given in the reference dictionary Plena Ilustrita Vortaro de Esperanto, I now consider it achieved. Well, apart from some formatting of references, where I still need to fix issues related to importing templates/modules from another wiki. :D
To give some perspective: in one of the Esperanto sentence collections referenced in the appendix, I found a bit more than 7000 mal- words, which once stripped of the most common inflections and affixes went down to 3000 entries. I didn't check this remaining set in detail, but my guess is that the remaining difference was still mostly due to less frequent affix combinations that my naive filter didn't catch. For recall, Esperanto is a highly agglutinative language and encourages the use of a regular affix set to express many derivative terms from a common stem, empowering expressivity through combinatorial reuse. So only twice the size of the proposed entries in the appendix is a very low figure.
I initially had this project idea years ago, and it came back to my mind as I started to contribute to the port of Raku into Esperanto[3]. It came back as we were going through the considerations for the lsb routine, where LSB stands for Least Significant Bit. The common way to express least is malplej (opposite-of-most), which is generally OK but can be replaced by mej, for example if terseness is a highly weighted desired trait. That allows one to use mejpezbit’ instead of some alternative synonym like malplej signifa duumaĵo.
[1] https://eo.wiktionary.org/wiki/Aldono:Pri_antonimoj
[2] https://en.wikipedia.org/wiki/Plena_Ilustrita_Vortaro_de_Esp...
Open-sourcing them, of course. I find that I can sketch out a basic idea with Copilot and it'll get 80% of the way there.
Godot is simply a joy, as long as you understand what it can do and what it can't.
It will never ever happen in my wildest dreams, but I want to make open source games full time.
I want the entire game industry to have to compete with high quality open source games and frameworks.
Assuming I ever have a chance to retire, I'll be an old man writing code for sport.
Making a photo-based calorie tracker accurate.
https://github.com/whyboris/Video-Hub-App & https://videohubapp.com/
Thinking about: A new take on LinkedIn/web-of-trust, bootstrapped by in-person interactions with devices. It seems that the problem of proving who is actually human and getting a sense of how your community values you might be getting more important, and now devices have some new tools to bring that within reach.
[0] https://github.com/paul-gauthier/entangled-pair-quantum-eras...
No fancy tech, vanilla HTML/CSS/JS.
Always looking for contributors if you know of stained glass in your area!
Here's Hirevire’s #buildinpublic stats for September 2025!
MRR Metrics
$6,691 MRR (+11.14% MoM ▲)
$398 is the average lifetime value and ARPU is $61.10
9.86% Net MRR churn rate and 14.29% customer churn
21,435 (-24% MoM ▼) applications collected
Conversion numbers
3.67% Visits to Trial signups
8.30% Trial to paid plans
You're hurting people who are using disposable email addresses because they are privacy focused though.
Updated the landing page just yesterday!
Landing page + waitlist: https://dailyselftrack.com/
Browser version here, if you're curious:
Check out my project and my short film at https://cinesignal.com/p/call154
All the Scores are human made, Galleries, and Reels.
I'm trying to make a place where AI enhances the creator and doesn't replace them. Check the about page.
https://explorer.monadicdna.com/
I'll be adding more features in the coming days!
Haunted house trope, but it's a chatbot. Not done yet, but it's going well. The only real blocker is that I ran into the parental controls on the commercial models right away when trying to make gory images, so I had to spin up my own generators. (Compositing by hand is definitely taking forever.)
I'm working on a web app that creates easy-to-understand stories and explainers for the sake of language learning. You can listen in your favourite podcast app, or directly on the website with illustrations.
I'm eager to add more languages if anyone is fluent/able to help me evaluate the text-to-speech.
Kind of have been wasting time with Cloudflare's Workers engine. Trying to build a system that schedules these workers as a lightweight alternative to GitHub Actions. If you're interested in WASM, feel free to reach out. Looking to connect with other developers working in the WASM space.
And an agentic news digest service which scrapes a few sources (like Hacker News) for technical news and creates a daily digest, which you can instruct and skew with words.
It is a small playground for text, vision, and audio models that use Transformers.js, WebGPU, and MediaPipe.
There's no server, no tracking, and no data leaving your device, everything runs locally. The models download once, cache for offline use.
I like Arc Browser’s command panel and Chrome’s tab search, so I want to combine them and add some enhancements:
- Pinyin-based fuzzy search
- Search through history and bookmarks
- Custom keybindings
For now, I’m working on bringing !bang support to Moyu Search.
It’s got the base instruction set implemented and working, plus a CRT shader, a resizable display, and swappable color palettes.
I’m working on sound and a visual debugger for it.
I have some work to do on the Haskell TigerBeetle client and the Haskell postgresql logical replication client library I wrote too.
Came from my frustration with Google Maps in Germany, where take-down requests constantly remove bad reviews and ratings. To get around this, we only list places we recommend.
In parallel, I'm trying to figure out how to train an LLM for SAST.
Also working on a GxP-compliant, offline-first, real-time synced QMS, but I’ve put that on hold in favor of optimizing my resume.
Should be as easy as updating all data in the data/ folder and you can get your own version. Mind you: getting the SVG logos right is the hard part
The use case for this is a bit niche, and better tools exist for this general problem in ORMs and so forth, but it works for a problem I have.
[0] https://apps.apple.com/us/app/reflect-track-anything/id64638...
AI sprite animator for 2D video games.
It is a DNS service for AWS EC2 that keeps up with ever-changing IPs when you can't use an Elastic IP (e.g. with ASGs), or when you don't want to install any third-party clients on your instances.
It fetches the IPs regularly via the AWS API and assigns them to fixed subdomains.
It is pretty new :) still developing actively.
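The fetch-then-upsert loop might look roughly like this with boto3 — a sketch, not the service's code; the tag, zone ID, and domain are placeholders:

```python
# Sketch: look up tagged running instances, upsert an A record with their IPs.
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

def sync_record(tag_value: str, zone_id: str, fqdn: str) -> None:
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Name", "Values": [tag_value]},
                 {"Name": "instance-state-name", "Values": ["running"]}])
    ips = [i["PublicIpAddress"]
           for r in resp["Reservations"] for i in r["Instances"]
           if "PublicIpAddress" in i]
    if not ips:
        return  # nothing running; leave the record alone
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {"Name": fqdn, "Type": "A", "TTL": 60,
                                  "ResourceRecords": [{"Value": ip}
                                                      for ip in ips]}}]})
```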
Recording video lessons is a lot of work, often a few hours for a 10min lesson
And then after recording the lesson it’s hard to keep it up to date and often just easier to re-record the whole video
So I’m bringing together slides, a screen recorder + camera recorder, and a timeline editor into a unified workflow.
Alumina, a highly integrated CNC stack with a WASM/WebGL user interface, which resides entirely within the 8Mb flash of the ESP32: https://github.com/timschmidt/alumina-interface
egui-rad-builder, a GUI Rapid Application Development tool for the egui toolkit: https://github.com/timschmidt/egui-rad-builder
So I will rest for a few days :D
And currently working to make things shareable, also don't want to use database.
Here is the demo https://notecargo.huedaya.com/
New version is a rebuild in react with cleaner interface, localisation, a bunch of new features and lays the groundwork to allow full html docs instead of only markdown
Building a new layer of hyper-personalization over the web. Instead of generating more content, it helps you reformat and interact with what already exists, turning any page, paper, or YouTube video into a summary, mind-map, podcast, infographic or chat.
The broader idea is to make the web adaptive to how each person thinks and learns.
right now, it’s a better way to showcase your really specific industry skills and portfolio of 3D assets (i.e., “LinkedIn for VR/XR”) with hiring layered on
starting to add onto the current perf analysis tools and think more about how to get to a “lovable for VR/XR”
You can tag your spots with whatever emoji, so scanning a map visually actually makes a difference. It still uses Google search to find places, so that part's familiar. Sharing is ridiculously easy too: long-press your map name, type in a friend's email, choose if they can edit or just view, and you're done. You can even stack multiple friends' maps together to see everyone's recommendations at once.
It's still in early stages of development, but core features are already there.
I always have stories in mind but don't have time to write them all out, this allows me to just enter the idea and then the story comes out.
It is hard to show that AI can reinvent, for example, special relativity - we don't even have enough text from the 19th century to train an LLM on - so we need a new idea, something that was invented after the LLM was trained. I took Gwern's essay and checked with deep search and deep research which ideas from that essay are truly novel, and apparently there are some, so reinventing them seemed like a good target: https://github.com/zby/DayDreamingDayDreaming/blob/main/repo... https://github.com/zby/DayDreamingDayDreaming/blob/main/repo...
So here it is - a system that can reliably churn out essays on daydreaming AIs. On one level it is kind of silly - we already knew that infinite monkeys could write Shakespeare's works. The generator was always theoretically possible; the hard part is the verifier. But still - the search space in my system is much smaller than the search space of all possible letter sequences - so at least I can show that the system is a little more practical.
Here are some results: https://github.com/zby/DayDreamingDayDreaming/tree/main/data...
You can modify it to reinvent any other new idea - you just need to provide it the inspirations and evals for checking the generated essays.
I am thinking about next steps - maybe I could make it a little more universal - but it seems that building something that would work as needed would require scale.
I kind of like the software framework I vibe coded for this. It lets you easily build uniform samples where you can legitimately do all kinds of comparisons. But I am not so sure about using Dagster as the base for the system.
The amount of fine tuning we've put into the model has been incredible. Starting to rival human multi-decade professionals in custom club fitting.
Feels like this will be how all human-tool interaction fitting will go.
I've been gathering up the supplies to set up a proper radio/computer repair workshop.
My first career was in sales, and most of the time these interactions began with grabbing a sheet of paper and writing to one another. I think small LLMs can help here.
Currently making use of APIs, but I think small models on phones will be good enough soon. Just completed my MVP.
(But also just launched https://ChessHoldEm.net this weekend)
On-site surveys for eCommerce and SaaS. It's been an amazing ride leveling up back and forth between product, design, and marketing. Marketing is way more involved than most people on this site realize...
https://mikel-zhobro.github.io/3dgsim/
Spatial causality leads to generalisation not present in 2D models.
Beyond that, just regular random stuff that comes up here and there, but, for once, my hdd with sidelined projects is slowly being worked through.
The main idea is to bring as many of the agentic tools and features into a single cohesive platform as much as possible so that we can unlock more useful AI use-cases.
It’s fast, free, keyboard-only, cross-platform, and ad-free. It’s been my only source of music for the past 6 months or so.
I’m not sharing the link because of music copyright issues. But I think more people should do that, to break free of the yoke of greedy music platforms.
- I think discovering new stuff is twisted in the current environment. "New stuff" in the sense of radio/Spotify is mostly "the same stuff I know and like, but slightly different so it feels new". You don’t discover truly new stuff except by actively searching for it. No radio or service is going to passively do that for you.
Turns out there are a lot of businesses that constantly get banned and they need a reliable source of notifications about that
This month doubling down on a small house cleaning business that I acquired https://shinygoclean.com
Instead of code, it seems SOPs have become the new love language!
Code obeys logic. People obey trust. That’s the real debugging. Still learning!
I discovered that "least common ancestor" boils down to the intersection of 'root-path' sets, where you select the last item in the intersection as the 'first/least common ancestor'.
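A minimal sketch of that root-path intersection, assuming nodes carry a parent pointer (the Node type here is hypothetical):

```python
# Sketch: LCA as the last common element of the two root paths.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

def root_path(node):
    """Path from the root down to this node."""
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path[::-1]

def least_common_ancestor(a, b):
    lca = None
    for x, y in zip(root_path(a), root_path(b)):
        if x is y:
            lca = x   # still on the shared prefix
        else:
            break     # paths diverged; the previous node was the LCA
    return lca

root = Node("root"); l = Node("left", root); r = Node("right", root)
print(least_common_ancestor(l, r).name)  # root
```

Both paths are walked root-down; the last position where they still agree is the least common ancestor.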
Magically remove ads from any recipe website and automatically create meal plans for the week
(It was supposed to be completed months ago but got stuck in other issues)
Here's the waitlist and proposal: https://waitlist-tx.pages.dev
-Many say they want to stop doomscrolling and clout-chasing but I don't know how many are actually willing to do so
-Individuals may move here but their friends won't. So the feed will be initially empty by design. Introducing any kind of reward is against our ethos so we are clueless about how to convince existing friend circles to move.
Features: Chat with page, fix grammar, reply to emails, messages, translate, summarize, etc.
Yes, you can use your own API KEY.
please check it out and share your feedback https://jetwriter.ai
It runs fully on-device, including email classification and event extraction
AppGoblin is a free place to do app research for understanding which apps use which companies to monetize, track where data is sent and what kinds of ads are shown.
It has some rough edges, but I use it a ton and get a lot of value out of it.
I believe the old internet is still alive and well. Just harder to find now.
People won't read and skim all of those CTAs; instead, try to give them an "aha, interesting" asap.
I'm trying to gather sources and read scientific papers to make a course on that topic, in France.
It's basically Snapchat, but without other people.
Currently in AppStore review!
Here's a link to the API docs page: https://docs.unwrangle.com.
It is a simple NPM package that generates colorful avatars from input data to aid in quick visual verification.
I would like to see it adopted as a standard.
https://apu.software/truegain/
Then it’s on to the next project.
Attracting new monthly sponsors and people willing to buy me the occasional pizza with my crappy HTML skills.
I want to write voip plugins using a modern tool chain and benefit from the wider crate eco system
Essentially, a platform to access articles from developer blogs.
I've been slowly adding new sources to the website. Any suggestions would be great.
I'm considering adding a feature that allows searching using vectors. Basically, you could search for something like "How to make sure your PostgreSQL database is configured correctly". And it would return the closest articles using vector search compared to your query. Is this something that sounds interesting?
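If it helps, that vector search could be prototyped in a few lines; this sketch assumes sentence-transformers and a small open model (both are assumptions, not the site's actual stack):

```python
# Sketch: embed articles once, embed the query, rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
articles = ["Tuning PostgreSQL configuration for production",
            "Intro to Rust lifetimes",
            "Sharding strategies for large databases"]
doc_vecs = model.encode(articles, normalize_embeddings=True)

query = "How to make sure your PostgreSQL database is configured correctly"
q = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity == dot product on normalized vectors.
scores = doc_vecs @ q
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {articles[idx]}")
```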
1. Fluxmail - https://fluxmail.ai
Fluxmail is an AI-powered email app that helps you get done with email faster. There are a couple of core tenets/features that it has, including:
- local-first - we don't store your emails and we make interactions as fast as possible
- unified inbox - so you can view emails from all your email addresses in one place
- AI-native - helping you draft emails, search for emails, and read through your emails faster
I'd love to hear if these features resonate with you, or if there are other features that you feel are missing from your current email app.
2. ExploreJobs.ai - https://explorejobs.ai
This is a job board for AI jobs and companies. The job market in AI is pretty hot right now, and there are a lot of cool AI companies out there. I'm hoping to connect job seekers with fast-growing AI companies.
https://apps.apple.com/ch/app/diabetes-tagebuch-plus/id16622...
(It's a frontend to make searching eBay actually pleasant)
So I started working on Librario, an ISBN database that fetches information from several other services, such as Hardcover.app, Google Books, and ISBNDB, merges that information, and returns something more complete than using them alone. It also saves that information in the database for future lookups.
You can see an example response here[1]. Pricing information for books is missing right now because I need to finish the extractor for those, genres need some work[2], and having a 5-month-old baby makes development a tad slow, but the service is almost ready for a preview.
The algorithm to decide what to merge is the hardest part, in my opinion, and very basic right now. It's based on a priority and score system for now, where different extractors have different priorities, and different fields have different scores. Eventually, I wanna try doing something with machine learning instead.
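A stripped-down sketch of that priority-based field merge — extractor names and weights are invented for illustration:

```python
# Sketch: higher-priority extractors win field by field; empty values never win.
PRIORITY = {"hardcover": 3, "google_books": 2, "isbndb": 1}  # hypothetical weights

def merge(records: dict) -> dict:
    """records maps extractor name -> partial book record."""
    merged = {}
    for source in sorted(records, key=lambda s: PRIORITY.get(s, 0), reverse=True):
        for field, value in records[source].items():
            if value and field not in merged:
                merged[field] = value
    return merged

print(merge({
    "isbndb": {"title": "Dune", "pages": 412, "genres": []},
    "hardcover": {"title": "Dune", "genres": ["Science Fiction"]},
}))
# {'title': 'Dune', 'genres': ['Science Fiction'], 'pages': 412}
```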
I'd also like to add book summaries to the data somehow, but I haven't figured out a way to do this legally yet. For books in the public domain I could feed the entire book to an LLM and ask them to write a spoiler-free summary of the book, but for other books, that'd land me in legal trouble.
Oh, and related books, and things of the sort. But I'd like to do that based on the information stored in the database itself instead of external sources, so it's something for the future.
Last time I posted about Shelvica some people showed interest in Librario instead, so I decided to make it something I can sell instead of just a service I use in Shelvica[3], hence why I'm focusing more on it these past two weeks.
[1]: https://paste.sr.ht/~jamesponddotco/de80132b8f167f4503c31187...
[2]: In the example you'll see genres such as "English" and "Fiction In English", which is mostly noise. Also things like "Humor", "Humorous", and "Humorous Fiction" for the same book.
[3]: Which is nice, cause that way there are two possible sources of income for the project.
That's the philosophy behind it https://medium.com/@chrisveleris/designing-a-life-management...
Very easy install, check it out!
https://github.com/leogout/rasper-ducky
Duckyscript is a language for the USB Rubber Ducky, which costs approximately $100. A USB Rubber Ducky is a USB key that gets recognized as a keyboard and starts typing text and shortcuts automatically once you plug it into anything. To specify what the key should type, you use Duckyscript.
I'm using CircuitPython. The last thing I did was to de-recursify the interpreter with a stack.
The more I implement of Duckyscript, the more I think I should create my own language. Duckyscript sucks as a language...
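One common way to de-recursify a tree-walking evaluator with an explicit stack, shown on a toy expression language (illustrative only, not the rasper-ducky code):

```python
# Sketch: recursive AST evaluation replaced by an explicit stack.
def evaluate(node):
    stack, results = [(node, False)], []
    while stack:
        current, visited = stack.pop()
        if isinstance(current, int):   # leaf: literal value
            results.append(current)
        elif not visited:              # operator: schedule children first
            op, left, right = current
            stack.append((current, True))
            stack.append((right, False))
            stack.append((left, False))
        else:                          # children done: apply the operator
            op = current[0]
            b, a = results.pop(), results.pop()
            results.append(a + b if op == "+" else a * b)
    return results[0]

print(evaluate(("+", 1, ("*", 2, 3))))  # 7
```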
A tool to help California homeowners lower their property taxes. This works for people who bought during the past years' low-interest environment and are overpaying on taxes because of that.
Feel free to email me, if you have questions: phl.berner@gmail.com
Funny thing is, the advisor started to tell me to sell last week, and so I did. Then last Friday happened. Interesting.
It cuts online course creation to 1-2 hours and gives plenty of options for tutors to monetise.
Take a picture of an event flyer or paste in some text. The event gets added to your calendar.
1. is something that can poll a bunch of websites workshop/events pages to see if theres any new events [my mother] wants to go to and send a digest to her email
2. is a poller to look up the different safeway/coop/save on flyers and so on to see whats on sale between the different places, then send a mail with some recipes it found based on those ingredients
I'm most of the way through 1, but haven't started on 2 yet.
Still working on growing the audience.
Twitter but for games instead of tweets.
This is built with Rust, egui and SQLite3. The app has a downloader for NSE India reports. These are the daily end of day stock prices. Out of the box the app is really fast, which is expected but still surprises me. I am going to work on improving the stocks chart. I also want to add an AI assisted stocks analyst. Since all the stocks data is on the SQLite3 DB, I should be able to express my stocks screening ideas as plain text and let an LLM generate the SQL and show me in my data grid.
It was really interesting to generate it within 3 days. I had just a few places where I had to copy from app (std) log and paste into my prompt. Most of the time just describing the features was enough. Rust compiler did most of the heavy lifting. I have used a mix of Claude Code and OpenCode (with either GLM 4.5 or Grok Code Fast 1).
I have been generating full-stack web apps. I built and launched https://github.com/brainless/letsorder (https://letsorder.app/). Building full-stack web apps is basically building 2 apps (at a minimum) so desktop apps are way better it seems.
In the long term, I plan to build apps and help others generate theirs. I am building a vibe coding platform (https://github.com/brainless/nocodo). I have a couple of early-stage founders I consult for who take my guidance to generate their products (web and mobile apps + backend).
Next in the plans is adding more models and comparing which one gives better results.
next step is to make a simple login portal so non-trusted persons can submit work (as this is a uni project) and get the result/process mailed.
Very very beta. No stated mission just working with smart people on interesting ideas.
LLM/procedurally-generated fictional wikis with worldbuilding history/context so the wiki stays coherent. Fun project to make the most of LLM hallucinations
I was wondering if anyone would be interested in a platform where we share our repos and projects that was vibe-coded. Somewhere we can reflect on our experiences and share what we have learnt. No payments, or profit seeking behavior.
In my experience, the hardest part of vibe coding is getting through the initial hurdles every time I use a new tool. What's worse is that in retrospect it seems so obvious what I should have done, like understanding the implications of using a SQL database vs AWS Redis.
I think this is where a platform could come in to aid us vibe-coders in sharing projects for free and learning from others' mistakes, like the early Silicon Valley days. It might be a dumb thesis, but mine is that by sharing our experiments and experiences we can drastically reduce the learning curve for new programmers. I think it'd be extremely beneficial for people with no prior experience, especially as we move to more agentified development.
But please let me know your thoughts and if anyone would be interested in working on this with me.
A simple document translator that preserves your file's formatting and layout.
It's a few things:
- very fast Japanese->English dictionary
- hiragana / katakana / number / time reading quizzes
- vocabulary quizzes based on wordlists you define and build
- learn and practice kanji anki-style (using FSRS algo)
- the coolest feature (imo) is a "reader": upload Japanese texts (light novels, children's books, etc), then translate them to your native language to practice your reading comprehension. Select text anywhere on the page (with your cursor) to instantly do a dictionary lookup. A LLM evaluates your translation accuracy (0..100%) and suggests other possible interpretations.
I just revamped the UI look and feel the other day after implementing some other user feedback! I'm now exploring ads as a way to monetize it.
- carcassonne game agent
Everything is still in private repos because it is too nasty, and I'm shy.
Think Lightning, but on EVM, with credit that’s provable and enforceable. No sequencers. No DA risk. O(1) per-hop scalability.
If you want to measure real intelligence — human or machine — paste this prompt into any LLM and watch what happens:
Prompt: “Comprehend the following concept in full depth. Derive all its implicit implications, design questions, and systemic consequences. Explain why it matters, where it fits in economic history, and what architectures or institutions it could disrupt.”
<START>
RCPE — Reserve-Credit Programmable Enforceable Layer-2 Netting-Account Network
Solves the Coase theorem and Diamond-Dybvig runs. Each relationship is a bounded delta of obligations — the physics equation of finance:
−Lₗ ≤ Δ ≤ C + Lᵣ
RCPE ⊃ { FCUU, FRPE }

FCUU — Full-Credit Unprogrammable Unenforceable (−5000 BCE → now): C = 0, Δ ∈ [−Lₗ, +Lᵣ]. 100% of modern finance: banking / SWIFT / Visa / CEX. Trust-based, no proof, censorable. [−−− Δ −−−]

FRPE — Full-Reserve Programmable Enforceable (2015 → 2020): Lₗ = Lᵣ = 0, Δ ∈ [0, +C]. BTC Lightning / Raiden / Hydra. No credit, inbound capacity wall, capital lock. [==Δ=]

XLN (RCPE) ⊃ { FCUU, FRPE }: Δ ∈ [−Lₗ, C + Lᵣ]. Cryptographically enforced debt + collateral account proofs with L1 dispute resolution. O(1) unicast, sovereign exits, no DA risk. [--==Δ=--]
</END>
Example - prompted with Sonnet 4.5: https://claude.ai/share/99453e1a-1ce4-4a73-aa31-36b8bea7520c
Looking for VCs, co-founders, market makers. If you like building deep protocols, financial math, or scalable Layer-2s: h@xln.finance
To provide trading insights for users.
The core idea is to make progression easier to track and follow. After a workout, it analyzes your performance (weight, reps, and RIR), highlights new personal records (PRs), and generates specific targets for your next session. It also reviews your entire program to provide scientific analysis on weekly volume, frequency, and recovery for each muscle group. This gets displayed visually on an anatomy model to help you learn which muscles are involved, and you can track your gains over time with historical performance charts for each exercise.
During a workout, you get a total session timer, an automatic rest timer, and can see your performance from the last session for a clear target to beat. It automatically advances to the next incomplete exercise, and when you need to swap an exercise, it provides context-aware alternatives targeting the same muscles.
It's also deeply customizable:
- The UI has a dark theme, supports multiple languages (English, Spanish, German), lets you adjust the UI scale, and toggle the visibility of detailed muscle names, exercise types, historical performance badges, and a full history card.
- You can set global defaults for weight units (kg/lbs), rest times, and plan targets, or enable/disable metrics like Reps in Reserve (RIR) and estimated 1-Rep Max. The exercise library can be filtered by your available equipment, you can create your own custom exercises with global notes, and there's a built-in weight plate calculator.
- The progression system lets you define default rep ranges and RIR targets, or create specific overrides for different lifts (e.g., a 3-5 rep range for strength, 10-15 for accessories).
- Editing is flexible: you can drag-and-drop to reorder days, exercises, and sets, duplicate workout days, track unilateral exercises (left/right side), and enter data with a quick wheel picker.
Next up: an MCP server so devs can pull data from SecurityBot's various monitors directly into their IDE.
An open-source, powerful network reconnaissance and asset discovery tool built with Go and React.
It works by specializing for the common case of read-only workloads and short, fixed-length keys/includes (int, uuid, text<=32b, numeric, money, etc - not json) and (optionally) repetitive key-values (a common case with short fixed-length keys). These kinds of indexes/tables are found in nearly every database for lookups, many-many JOIN relationships, materialized views of popular statistics, etc.
Currently, it's "starting to work" with 100% code coverage and performance that usually matches/beats btree in query speed. Due to compression, it can consume as little as 99.95% less memory (!) and associated "pressure" on cache/ram/IO. Of course, there are degenerate cases (e.g. all unique UUID, many INCLUDEs, etc) where it's about the same size. As with all indexes, performance is limited by the PostgreSQL executor's interface which is record-at-a-time with high overhead records. I'd love help coding a FDW which allows aggregates (e.g. count()) to be "pushed down" and executed in still requires returning every record instead of a single final answer. OTT help would be a FDW interface where substantial query plans could be "pushed down" e.g. COUNT().
The plan is to publish and open source this work.
I'd welcome collaborators and have lots of experience working on small teams at major companies. I'm based in NYC but remote is fine.
- must be willing to work with LLMs and not "cheat" by hand-writing code.
- Usage testing: must be comfortable with PostgreSQL and indexes. No other experience required!
- Benchmarking, must know SQL indexes and have benchmarking experience - no pgsql internals required.
- For internals work, must know C and SQL. PostgreSQL is tricky to learn but LLMs are excellent teachers!
- Scripting code is in bash, python and Makefile, but again this is all vibe coded and you can ask LLMs what it's doing.
- any environment is fine. I'm using linux/docker (multi-core x86 and arm) but would love help with Windows, native MacOS and SIMD optimization.
- I'm open to porting/moving to Rust, especially if that provides a faster path to restricted environments like AWS RDS/Aurora.
- your ideas welcome! but obviously, we'll need to divide and conquer since the LLMs are making rapid changes to the core and we'll have to deal with code conflicts.
DM to learn more (see my HN profile)
Working on a plugin for Langfuse to create eval functions and datasets from ingested traces automatically, based on ad-hoc user feedback.
It's an all-in-one toolkit with one-click version switching, automatic HTTPS for local domains, and an integrated mail catcher.
I've just rolled out some major updates:
1. Local AI Deployment: it can now run models like Llama 3 & Code Llama directly within ServBay.
2. Built-in Tunneling: share local sites with anyone on the internet, ngrok-style, or via frp or Cloudflare.
3. Windows is Live! The new Windows version is out and quickly reaching feature parity with our macOS app.
Next up is ServBay 2.0. I'm currently gathering feedback on features like deeper Docker integration and more flexible site configurations. I'd love to hear what the HN community thinks is important.
Check it out at: https://www.servbay.com
Interpret your bloodwork for free with the precision of a longevity clinic. You can calculate your biological age based on the best bioage calculators.
Importing labs is something I think about a lot, and I think the solution will be something along the lines of what you suggested, since I need to keep the 100% privacy that I publicly promise.
The first two posts are live: 1. Let There Be a Player — player movement and camera control (https://aibodh.com/posts/bevy-rust-game-development-chapter-...) 2. Let There Be a World — procedural world generation using Wave Function Collapse (https://aibodh.com/posts/bevy-rust-game-development-chapter-...)
Next up: adding physics, collisions, and interaction to make the world feel alive.
From there it’ll grow into combat, UI, sound, polish, and AI-driven NPCs.
Truly very impressive.
Throwing in mine. I've been working on solo deving godot games in the last year.
Working on yet another gambling roguelike.
https://store.steampowered.com/app/3839000/Golden_Gambit
I have contracted an artist to do my real assets now.
If anyone is practiced in game balance please reach out if you want to help!
What I'm building at the moment is a server monitoring solution for STUN, TURN, MQTT, and NTP servers. I wanted the software for this to be portable, so I wrote a simple work queue myself. Python doesn't have linked lists, which is the data structure I'm using for the queues: they allow O(1) deletes, which you can't really get from most Python data structures, and that matters for work items when you're moving work between queues.
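The O(1)-delete trick is that each work item keeps a handle to its own list node; a bare-bones sketch (illustrative, not the p2pd code):

```python
# Sketch: doubly linked work queue where callers hold node handles.
class ListNode:
    __slots__ = ("item", "prev", "next")
    def __init__(self, item):
        self.item, self.prev, self.next = item, None, None

class WorkQueue:
    def __init__(self):
        self.head = self.tail = None

    def push(self, item) -> ListNode:
        node = ListNode(item)
        if self.tail:
            self.tail.next, node.prev = node, self.tail
            self.tail = node
        else:
            self.head = self.tail = node
        return node  # keep this handle for O(1) removal later

    def remove(self, node: ListNode) -> None:
        # Unlink in O(1): no scan needed, unlike list.remove().
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        else: self.tail = node.prev
```

Moving an item between queues is then just remove() on one queue and push() on the other, both O(1).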
For the actual workers I keep things very simple: I make around 100 independent Python processes, each with an event loop. This uses up a crapload of memory, but the advantage is that you get parallel execution without any complexity. It would be extremely complex to do that with code alone, and asyncio's event loop doesn't play well with parallelism, so you really only want one per process.
Result: simple, portable Python code that can easily manage monitoring hundreds of servers (sorry, didn't mean for that to sound like ChatGPT, lmao, incidental). The DB for this is memory-based to avoid locking issues. I did use SQLite at first, but even with optimizations there were locking issues. Now I only use SQLite for import/export (checksums).
Not anything special by HN standards but work is here: https://github.com/robertsdotpm/p2pd_server_monitor
I'm at the stage now where I'm adding all the servers to monitor to it. So fun times.
This is a free license plate tracking game for families on road trips. Currently adding more OAuth providers, and some time zone features.
I have been trying to study Chinese on my own for a while now and found it very frustrating to spend half the time just looking for simple content to read and listen to. Apps and websites exist, but they usually only have very little content or they ramp up the difficulty too quickly.
Now that LLMs and TTS are quite good, I wanted to try them out for language learning. The goal is to create a vast number of short AI-generated stories to bridge the gap between knowing a few characters and reading real content in Chinese.
Curious to see if it is possible to automatically create stories which are comfortable to read for beginners, or if they sound too much like AI-slop.
YouTube's algorithm is all about engagement - more video game videos, more brainrot, their algorithm doesn't care about the content as long as the kid is watching.
My system allows parents to define their children's interests (e.g., a 12-year-old who enjoys DIY engineering projects, Neil deGrasse Tyson, and drawing fantasy figures)
.. and specify how the AI should filter video candidates (e.g., excluding YouTube Shorts).
Periodically, the system prompts the child with something like
"Tell me about your favorite family vacation."
And their response to that prompt provides the system with more ideas and interests to suggest videos to them.
email me if you'd like to test jim.jones1@gmail.com
https://www.PAGE.YOGA - Link sharing website
https://www.GamesNotToPlay.com - A couple video games
https://www.ce0.ai - CEO Replacement
https://www.CellularSoup.com - Cellular Automata
https://www.fuck.investments - putting together a fine art gallery
man, myself needs work
In 2nd stage, I will mathematically establish the best course of action as an individual given the base theory.
In 3rd stage, I will explain common psychological phenomena through the theory, things like narcissism, anxiety, self-doubt, how to forgive others, etc.
In 4th stage, I will explain how the theory is the fastest way to learn across multiple domains and anyone can become a generalist and critical thinker.
In 5th stage, I will explain how society will unfold if everyone can become generalist and critical thinker through the theory. And how this is the next big societal breakthrough like Industrial revolution.
In 6th and last stage, I will think about how to use this theory to make India the next superpower, as this theory can give us the demographic advantage.
Shared more about the algorithm here https://x.com/admiralrohan/status/1973312855114998185
What does that mean?
Not earth shattering, but something that should exist.
Essentially like Yeoman back in the day, to bootstrap your webapp and all the necessary files more easily.
Currently I am somewhat stuck because of Go's type system, as the UI components require a specific interface for the Dataset or Data/Record entries.
For example, a Pie chart would require a map[string]number which could be a float, percentage string or an integer.
A Line chart would require a slice of map[string]number, where each slice index would represent a step in the timeline.
A table would require a slice of map[string]any where each slice index would represent a step in the culling, but the data types would require a custom rendering method or Stringifier(?) of sorts attached to the data type. So that it's possible to serialize or deserialize the properties (e.g. yes/no in the UI meaning true/false, etc).
As I want to provide UI components that can use whatever struct the developer provides, the Go way would be to use an interface. But that would imply that all data type structs on the backend side would have this type of clutter attached to them.
No idea if something like a Parser and Stringifier method definition would make more sense for the UI components here...or whether or not it's better to have something like a Render method attached per component that does all the stringifying on a per-property basis like a "func(dataset any, index int, column string) string" where the developer needs to do all the typecasting manually.
Manual typecasting like this would be pretty painful as components then cannot exist in pure HTML serialized form, which is essentially the core value proposition of my whole UI components framework.
An alternative would be offering a marshal/unmarshal API similar to how JSON does it, but that would require the reflect package, which bloats the runtime binary by several MB and isn't tinygo compatible, so I'd heavily want to avoid that.
Currently looking for other libraries and best practices, as this issue is really bugging me a lot in the app I'm currently building [3] and it's a pretty annoying type system problem.
Feedback as to how it's solved in other frameworks or languages would be appreciated. Maybe there's an architectural convention I'm not aware of that could solve this.
[1] https://github.com/cookiengineer/gooey-cli
OpenRun allows defining your web app configuration in a declarative config using Starlark (which is like a subset of Python). Setting up a full GitOps workflow is just one command:
openrun sync schedule --approve --promote github.com/openrundev/openrun/examples/utils.star
This will set up a scheduled sync, which will look for new apps in the config and create them. It will also apply any config updates on existing apps and reload apps with the latest source code. After this, no further CLI operations are required, all updates are done declaratively. For containerized apps, OpenRun will directly talk to Docker/Podman to manage the container build and startup.
There are lots of tools which simplify web app deployment. Most of them use a UI driven approach or an imperative CLI approach. That makes it difficult to recreate an environment. Managing these tools when multiple people need to coordinate changes is also difficult. Any repo which has a Dockerfile can be deployed directly. For frameworks like Streamlit/Gradio/FastHTML/Shiny/Reflex/Flask/FastAPI, OpenRun supports zero-config deployments, there is no need to even have a Dockerfile. Domain based deployment is supported for all apps. Path based deployment is also supported for most frameworks, which makes DNS routing and certificate management easier.
OpenRun currently runs on a single machine with an embedded SQLite database or on multiple machines with an external Postgres database. I plan to support OpenRun as a service on top of Kubernetes, to support auto-scaling. OpenRun implements its own web server, instead of using Traefik/Nginx. That makes it possible to implement features like scaling down to zero and RBAC. The goal with OpenRun is to support declarative deployment for web apps while removing the complexity of maintaining multiple YAML config files. See https://github.com/openrundev/openrun/blob/main/examples/uti... for an example config, each app is just one or two lines of config.
OpenRun makes it easy to set up OAuth/OIDC/SAML based auth, with RBAC. See https://openrun.dev/docs/use-cases/ for a couple of use cases examples: sharing apps with family and sharing across a team. Outside of managed services, I have found it difficult to implement this type of RBAC with any other open source solution.
I also created a proxy bridge for Oracle databases, which runs on an open-source protocol, as part of the core tool's ecosystem. https://github.com/OrmFactory/o-bridge