I'm working on Brain Hurricane (brainhurricane.ai). It's the kind of structured tool I wish I'd had in my career. I was tired of unstructured brainstorming sessions that recycled the same ideas, and of passively waiting for a "great idea" that never arrives.
My goal was to create a systematic process. It uses AI to help you generate ideas with proven methods like SCAMPER and Six Thinking Hats, then immediately analyze them with frameworks like SWOT, PESTEL, and the Business Model Canvas. It's about moving from a fuzzy concept to a validated idea with more confidence and clarity.
On a personal level, this project was my way of diving headfirst into modern AI development. I'm building it with Next.js, TypeScript, Python, and Linux, which has been a fun and humbling experience coming from a more traditional enterprise stack.
It's still early, but the core features are live. I'd genuinely appreciate any feedback from the HN community, especially from those who have struggled to turn abstract ideas into something concrete.
Here's the clickable link for anyone interested: https://brainhurricane.ai
I'm an Aged Care nurse of 13 years. I taught myself to code 5 years ago and am obsessed with automating nursing tasks (e.g. auditing, funding, quality), because the volume of admin work required of nurses is absurd, and the industry is very far behind and very resistant to change, spending money, and technology in general.
I have been shouting into an empty void for the last 3 years, but that's okay; I am patient.
I mostly focus on standalone, local AI tools that do one task and are open-ended (manual file upload) to suit the 20 million different software systems in aged care. Keep it all as simple as possible with minimal hurdles.
Generally I use llama.cpp, Qwen3, and Python, then wrap it in some sort of ugly GUI or, more recently, AutoHotkey. The nurses feel powerful pressing a few buttons with AHK and watching work be done. (It avoids the command line, and avoids me being paralyzed by front-end stuff.)
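Roughly, every tool boils down to the same pattern. A minimal sketch (the script name, prompt and fields are illustrative, and it assumes llama.cpp's built-in server is already up with a Qwen3 GGUF, e.g. `llama-server -m Qwen3-8B.gguf --port 8080`):

    # audit_note.py (name made up) - send an uploaded note to the local
    # llama.cpp server (OpenAI-style API) and print a draft summary.
    import sys
    import requests

    note = open(sys.argv[1], encoding="utf-8").read()  # "manual file upload"

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": "You are an aged-care auditing assistant."},
                {"role": "user", "content": "Summarise the audit findings in this note:\n" + note},
            ],
            "temperature": 0.2,
        },
        timeout=300,
    )
    print(resp.json()["choices"][0]["message"]["content"])

The AHK side is then just a hotkey that runs the script on a chosen file and pops up the output.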
I don't know why I am sharing this as I am way out of my depth here, but there you go. If anyone else is in the Aged Care space, give me a shout. *edit because I can't format new lines or spell.
It's been a fun challenge, as the games are pretty clustered in terms of scoring and fairly random, with few points scored per game. I'm also not the biggest fan of hockey, so it's been fun for me to see which teams rank high.
I've been leaning on AI for the first time, which has been interesting; I see a ton of AI content around web dev, but less around data science. It's interesting how quickly AI will break a common-sense rule like data leakage. Really fun learning experience!
In terms of platform, I've been having a ton of fun with static sites. They're cheaper to host and more secure, and all I need is a domain name to get one accessible on the web.
I'm working on Teletable (https://teletable.app), a macOS app that shows live football & F1 standings/results with a teletext interface (think BBC Ceefax). It's free and on the App Store:
https://apps.apple.com/us/app/teletable-football-teletext/id...
The idea is quite simple: improve supply chain security by having a validated mirror of NPM, PyPI, Cargo, etc.
There's a lot of static and runtime behavioral analysis that can be done as a baseline, but it will always be possible to bypass, since it's a cat-and-mouse game. I'm therefore also looking into how tooling, and maybe LLMs, could assist humans in reviews and allow better scaling.
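To give a flavour of what one baseline static check could look like (a toy sketch assuming the standard npm tarball layout; a real validator would layer many such checks plus runtime analysis):

    # Flag npm packages whose manifest declares install-time hooks,
    # a common malware vector.
    import json
    import tarfile

    SUSPICIOUS = {"preinstall", "install", "postinstall"}

    def flag_install_hooks(tarball_path):
        with tarfile.open(tarball_path, "r:gz") as tar:
            manifest = json.load(tar.extractfile("package/package.json"))
        return sorted(SUSPICIOUS & manifest.get("scripts", {}).keys())

    # flag_install_hooks("some-package-1.0.0.tgz") -> e.g. ["postinstall"]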
Currently in the more academic stage of the project (research, talking to professors and industry connections, etc.), hoping to start off with a good design to iterate on.
My reference point for the project stems from what I saw at Huawei during my internship: they had quite a bureaucratic system for reviewing dependencies and an internal "secure" mirror. The goal is to generalize that so supply chain security is accessible to small/medium companies or even individuals.
Happy to get any feedback :)
Finally getting close to relaunch. Sales have been stable, but there's been no growth, since traffic has been going down this year.
I am rewriting https://createaclickablemap.com/. I started changing some things last year, adding microservices with Node.js; I am using Vue.js for the new editor and Laravel for the back-end. I've added several features that I had planned over the years. I am 98% there and mostly prepping for the migration. I will switch to subscriptions and add a couple of different plans.
I would be incredibly grateful for any feedback – I'm looking to genuinely improve the experience. Specifically, I'm wondering whether it is easy to use and what it lacks.
I have a huge backlog to cover for this, but so far it has been great fun and I have learnt an incredible range of things!
It's built entirely with SwiftUI + RealityKit, and it's been an incredible journey into visionOS and spatial computing.
Here’s the TestFlight link: https://testflight.apple.com/join/tWS4CERT
You can browse the catalog at these addresses:
It's a minimalist time zone converter. The real value-add, in my opinion, is that it lets you look up multiple locations and add them all to a list that updates in real time. I built this a few years ago, but I made a bunch of UX and quality-of-life changes recently. I have metrics around usage, but I would be curious to chat with some users to get their take on how it can be improved.
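The conversion core is deliberately simple; a sketch of the idea using only the stdlib (the location list is just an example):

    # One reference instant rendered across a user-maintained list of zones.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    locations = ["America/New_York", "Europe/Berlin", "Asia/Tokyo"]
    now = datetime.now(timezone.utc)  # refreshed continuously in the app

    for tz in locations:
        print(f"{tz:20} {now.astimezone(ZoneInfo(tz)):%a %H:%M (%Z)}")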
This has been a productive weekend so far. I've recently solved an issue with cron jobs that was driving me mad for ages, and finally feel like I'm close to a first tagged release. I have just popped linting into the GitHub CI.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
It’s still looking pretty rough around the edges.
https://www.inclusivecolors.com/
It's aimed more at designers right now who have some familiarity with designing color palettes and want to customize everything, but I want to add more hand-holding later. Open to feedback!
Current focus: anti-ban strategies for higher-throughput / lower-cost scraping. Trying to identify constraints, both technical and financial, to calculate feasibility. This may be slightly controversial here, since many are averse to bots and scraping. I've actually increased per-request costs because I suspect scraping will become more restricted and less tolerated over time; the supply-side signals point that way.
Ideas I'm thinking about: since I'm steering away from the higher-concurrency/lower-cost scraping option, I'm considering increasing data granularity, expanding retailer coverage, and adding an MCP server to help users query and analyse the e-commerce data they're extracting with the APIs.
Background: I've been building this solo from India for about four years. It began as freelancing, then became an API product around a year ago. Today, I have ~90 customers, including a few reputable startups in California. For me the hardest parts are social, not technical or financial: staying connected to US working culture can feel inverted from here. I've applied to YC a few times and might again.
In October I finished the PDF parser. It was a big challenge extracting PDF content with correct paragraph breaks locally on the user's computer. I'm going to write about this soon.
Now I'm working on a web extension that talks to the app running locally on your system, so you can use WithAudio in your browser with very good performance: 100% local and private.
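One common way to wire an extension to a local app is the browsers' native messaging protocol; a sketch under that assumption (the message fields are made up, not the real schema):

    # host.py - the extension sends JSON framed with a 4-byte
    # little-endian length prefix over stdio; the host replies the same way.
    import json
    import struct
    import sys

    def read_message():
        raw_len = sys.stdin.buffer.read(4)
        if not raw_len:
            sys.exit(0)  # browser closed the pipe
        length = struct.unpack("<I", raw_len)[0]
        return json.loads(sys.stdin.buffer.read(length))

    def send_message(msg):
        data = json.dumps(msg).encode("utf-8")
        sys.stdout.buffer.write(struct.pack("<I", len(data)) + data)
        sys.stdout.buffer.flush()

    while True:
        request = read_message()
        # ...hand request["text"] to the local engine here...
        send_message({"ok": True, "chars": len(request.get("text", ""))})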
This includes a correlation matrix with rolling correlation charts, a minimap, hierarchical clustering, time series detrending, and more. I've improved its design and performance and I'm developing new features to better contextualize the visible subsection relative to the entire dataset.
I've also rewritten the entire project in Svelte 5 (there's still a lot of cleanup to do).
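For a flavour of the computations behind it (the app itself runs in the browser; the data and column names below are made up):

    # Pairwise correlation matrix, rolling correlation, simple detrending.
    import pandas as pd

    df = pd.read_csv("series.csv", index_col=0, parse_dates=True)

    corr_matrix = df.corr()                              # full pairwise matrix
    rolling = df["a"].rolling(window=30).corr(df["b"])   # rolling correlation
    detrended = df.diff().dropna()                       # first differences

    print(corr_matrix)
    print(rolling.tail())
    print(detrended.corr())  # correlations without the shared-trend effect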
Allows you to listen to live online radio streams.
I wanted something with a minimal and fast UI and none of the other web apps I could find really fit my needs so I built this.
During work I like to listen to online radio, so it seemed like a no-brainer to make it for myself, and if others enjoy it too, even better.
No clout-chasing ragebait news or doomscrolling. See updates from your friends and that's it.
site link: https://intimost.com/login/
demo creds:
test@example.com
Demo123!
More context (Show HN): https://news.ycombinator.com/item?id=45721134
A recipe collection from Eastern spiritual traditions.
If you follow certain traditions, there may be a certain way to eat and cook.
This is the start of collecting them in one place.
This week I'm thinking about whether it makes sense to provide a location history 'vault', designed to let users expose their location history to LLMs as context.
It has a rich free tier, simple API, and a client dashboard that is easy to use. I do my best to build a service that I would love to use as a software engineer.
Building a tool to check your site layout and copy from multiple devices. Uses gpt-5 vision to find inconsistencies in headings/images.
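The loop is roughly: screenshot the page at several viewports, then ask a vision model to diff them. A sketch assuming Playwright plus the OpenAI SDK (URL, viewports and prompt are placeholders):

    import base64
    from openai import OpenAI
    from playwright.sync_api import sync_playwright

    VIEWPORTS = {"phone": (390, 844), "laptop": (1440, 900)}
    shots = []

    with sync_playwright() as p:
        browser = p.chromium.launch()
        for w, h in VIEWPORTS.values():
            page = browser.new_page(viewport={"width": w, "height": h})
            page.goto("https://example.com")
            shots.append(base64.b64encode(page.screenshot(full_page=True)).decode())
        browser.close()

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-5",  # per the post; any vision-capable model works here
        messages=[{
            "role": "user",
            "content": [{"type": "text", "text": "Compare these renders of the "
                         "same page and list heading/image inconsistencies."}]
                    + [{"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b}"}}
                       for b in shots],
        }],
    )
    print(resp.choices[0].message.content)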
If this isn't something people want then it should be shut down.
I am trying to use wasm/web workers to execute actions for Git-related workflows (think GitHub Actions, but much lighter). Currently working on OTel-related stuff and a small engine to run distributed tasks on Cloudflare Workers.
The architecture is deliberately minimal: a ZeroMQ-based broker coordinating worker nodes through a rather spartan protocol that extends MajorDomo. Messages carry UUIDs for correlation, sender/receiver routing, type codes for context-dependent semantics, and optional (but very much used) payloads. Pipeline definitions live in YAML files (as do worker and client configs), describing multi-step workflows with conditional routing, parallel execution, and wait conditions based on worker responses. Python is the language of the logic part.
I am trying to follow the "functional core, imperative shell" philosophy, where each message is essentially an immutable, auditable block in a temporal chain of state transformations. This should enable audit trails, event sourcing, and potentially no-loss crash recovery. A built-in blockchain-like verification is something I'm currently researching and could add to the whole pipeline processing.
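As a sketch of that idea (field names are my illustration, not the real wire format):

    # An immutable message whose digest chains to its predecessor, so the
    # whole pipeline history can be replayed and verified.
    import hashlib, json, uuid
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Message:
        sender: str
        receiver: str
        type_code: int
        payload: dict
        prev_hash: str                       # links to the preceding message
        msg_id: str = field(default_factory=lambda: str(uuid.uuid4()))

        def digest(self) -> str:
            body = json.dumps(self.__dict__, sort_keys=True).encode()
            return hashlib.sha256(body).hexdigest()

    m1 = Message("client", "worker.ocr", 1, {"file": "scan.pdf"}, prev_hash="0" * 64)
    m2 = Message("worker.ocr", "client", 2, {"text": "..."}, prev_hash=m1.digest())
    # replaying the chain and re-checking digests gives the audit trail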
The hook system provides composable extensibility for all the main user-facing "submodules" through mixin classes, so you only add complexity for features you actually need. The main pillars of functionality (the broker, the worker, and the client, as well as some others) are designed as self-contained monolithic classes (often breaking the DRY principle...) whose additional functionality is composed rather than inherited, through mixins that add behaviour while minimizing the amount of added "state capital" (the accent is on behaviour rather than state management). The user-definable @hook("process_message"), @hook("async_init"), @hook("cleanup"), etc. cut across the lifecycle of each submodule and allow for simple functionality extension.
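One possible shape for such a registry and mixin composition (my guess at the mechanics, reusing the hook names from above):

    # Hooks are tagged functions collected per class across the MRO, so
    # mixins contribute behaviour without adding state.
    class HookMixin:
        def __init_subclass__(cls, **kw):
            super().__init_subclass__(**kw)
            hooks = {}
            for klass in reversed(cls.__mro__):
                for attr in vars(klass).values():
                    for point in getattr(attr, "_hook_points", ()):
                        hooks.setdefault(point, []).append(attr)
            cls._hooks = hooks

        def run_hooks(self, point, *args):
            for fn in self._hooks.get(point, []):
                fn(self, *args)

    def hook(point):
        def deco(fn):
            fn._hook_points = getattr(fn, "_hook_points", ()) + (point,)
            return fn
        return deco

    class LoggingMixin(HookMixin):
        @hook("process_message")
        def log_it(self, msg):
            print("saw:", msg)

    class Worker(LoggingMixin):          # compose behaviour, not state
        def handle(self, msg):
            self.run_hooks("process_message", msg)

    Worker().handle({"type_code": 1})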
I'm also implementing a very simple distributed virtual file system with unixoid command patterns (ls, cd, cp, mv, etc.) supporting multiple backends for storage and transfer; e.g. you can simply have your data worker store files it subscribes to in a local folder and have it use either its SSH, HTTPS, or FTPS backend to serve them on demand. The data transfers employ per-file-operation ephemeral credentials; the broker only orchestrates the metadata message flow between the sender and receiver of the file(s), while the transfer happens between the nodes themselves. The broker is the ultimate and only source of truth when it comes to keeping tabs on the file tables; the nodes sync, in part or in toto, the actual physical files themselves. The VFS also features rather rudimentary permission control.
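The handshake, roughly (field names and the token scheme are illustrative):

    # The broker never touches file bytes: it mints a one-shot credential
    # and tells both nodes where to meet.
    import secrets
    import time

    def authorize_transfer(file_id, sender_node, ttl=60):
        return {
            "file_id": file_id,
            "url": f"https://{sender_node}/vfs/{file_id}",  # sender's HTTPS backend
            "token": secrets.token_urlsafe(32),             # ephemeral credential
            "expires": time.time() + ttl,
        }

    # The broker records the grant in its file table (the single source of
    # truth) and forwards it to the receiver, which then fetches directly.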
So where's the ML part, you might ask? The framework treats ML models as workers that consume messages and produce outputs, making it trivial to chain preprocessing, inference, postprocessing, fine-tuning, and validation steps into declarative YAML pipelines with human checkpoints at critical decision points. Each pipeline can be client-controlled to run continuously, step by step, or be interrupted at any point in its lifecycle. So each step, or rather each message, is client-verifiable; clients can modify messages and propagate the pipeline with the corrected content, and pipelines can define "on_correction", "on_rejection", and "on_abort" handlers for each step along the way, where the endpoints are all services that workers need to register. The workers provide services like "whisper_cpp_infer", "bert_foo_finetune_lora", "clean_whitespaces", "openeye_gpt5_validate_local_model_summary", etc.; the broker makes sure the messages flow to the right workers; the workers make sure the messages' content is correctly processed; and the client (can) make(s) sure the workers did a good job.
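A hypothetical pipeline could look like this, written here as the Python dict a YAML definition would load into (keys and layout are my guesses, not the real schema):

    pipeline = {
        "name": "transcribe_and_validate",
        "steps": [
            {"service": "whisper_cpp_infer", "on_rejection": "abort"},
            {"service": "clean_whitespaces"},
            {"service": "openeye_gpt5_validate_local_model_summary",
             "on_correction": "clean_whitespaces",  # re-route after a human fix
             "checkpoint": "human"},                # pause for client sign-off
        ],
    }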
Sorry for the wall of text and disclaimer: I'm not a dev, I'm an MD who does a little programming as a hobby (thanks to gen-AI it's easier than ever to build software).