Show HN: I built a social media management tool in 3 weeks with Claude and Codex
152 points
6 hours ago
| 15 comments
| github.com
| HN
FireInsight
4 hours ago
[-]
I am genuinely in the "target market" for a tool such as this, but having evaluated one previously I found the quality and self-hosting experience to be pretty bad, and that a proprietary freemium product was still a better experience.

I'm hesitant to even take a look at this project due to the whole "vibe coded in 3 weeks" thing, though. Hearing that says to me that this is not serious or battle-tested and might go unmaintained or such. Do you think these are valid concerns to have?

reply
spicyusername
4 hours ago
[-]
We're entering an era where delivering software is cheap. Basically any idea can have an MVP implemented by one or two people in just a month or two now. The industry is quickly learning what the next set of bottlenecks is, now that the bottleneck is no longer writing code.

Planning, design, management alignment, finding customers, integrating with other products, waiting for review, etc. Basically all the human stuff that can't be automated away.

Your comment reminds me to add building a support team to the list.

reply
localhoster
4 hours ago
[-]
Was it ever? Even before LLMs, writing software, or at least web clients, was as easy as it can get.
reply
written-beyond
4 hours ago
[-]
I agree, software (software startups) has always been the golden child of investors because of how cheap it is compared to hardware or any other physical good.

Good software is expensive regardless of the involvement of LLMs because you need someone to take responsibility. Large companies will save a buck because there may be fewer people needed to take said responsibility, but it's probably a marginal saving compared to the overall scheme of things.

reply
varispeed
1 hour ago
[-]
It was "easy" in the sense that you could deploy the 7B-model equivalent of a developer, who could eventually get something sort of working, or you could spend a lot of money to actually get results from talented developers - the equivalent of maxing out Opus 4.6 daily.
reply
jcmeyrignac
1 hour ago
[-]
Delivering is cheap, but maintenance is expensive... if not impossible!
reply
holoduke
27 minutes ago
[-]
It's not. Maintenance is easy now. We have at least 200 legacy products in various old languages. It has never been easier to work on those than since Claude came into the picture. The argument I hear about support being expensive is not true, I think.
reply
apsurd
10 minutes ago
[-]
Maintenance of mountains more agentic code.

The argument is the agents can maintain what the agents build. But someone has to manage the explosion in complexity.

I just quit my job because there was a top-down explosion of shipping agentic code. I don't think it's going to work, and I don't want my job to be maintaining someone else's 50x code output.

reply
KellyCriterion
1 hour ago
[-]
...and customer support!
reply
sixtyj
4 hours ago
[-]
I see your point.

Last time, I “vibe coded” something internal, and I liked it because I couldn’t find an external solution.

I admire coders who can finish their code into a deliverable and usable piece.

The issue here is software abundance: people will start to hesitate because of the absurd pile of options they would have to evaluate.

It reminds me of the statistics on global ice cream sales. People want certainty, so they choose chocolate or vanilla :)

Therefore many good software projects will have a problem finding users.

reply
swasheck
3 hours ago
[-]
i think we need to encode (or refine) what we mean by “vibe code.” my original impression was that it described the process whereby someone with an idea, but lacking development/engineering skills, leveraged an llm via an agent to create the mechanics to bring their idea to fruition. anymore it seems like if it has a hint of AI then it’s “vibe coded.”

ironically, i didn’t read the article because i come to the comments now to see if it's been identified as AI slop, so i don’t know which area this falls into

reply
dghlsakjg
2 hours ago
[-]
Definitely. The original vibe coding tweet is pretty explicit about “forgetting the code exists”, and “throwaway weekend projects” (https://x.com/karpathy/status/1886192184808149383)

Now however, many people just use it to mean any ai-assisted coding.

One person says “vibe coding” to mean a throwaway script to scrape some page. Others use it to mean a code reviewed app built by a team using Claude.

It is so broad as to be meaningless at this point

reply
sixtyj
2 hours ago
[-]
Cool. So I am an amateur AI/LLM-assisted coder. I don’t code for a living; I know the principles (or I think so) but don’t remember syntax (too old to learn new tricks :)
reply
m000
4 hours ago
[-]
I agree. It's not like this project is disrupting an overpriced product/SaaS.

E.g. Buffer charges around $50 per year per social media account, which gives you an unlimited number of collaborating user accounts. And their single user plans are even cheaper.

I don't see how self-hosting would be a worthy investment of your time/effort in this case, unless you are in some grossly mismanaged organization where you have several devops engineers paid for doing literally nothing.

reply
deaux
21 minutes ago
[-]
I know solo bootstrappers who have 5+ accounts across platforms for one app, it's quite normal for B2C.
reply
bryanhogan
2 hours ago
[-]
$50 is a high price to ask, no? And I just looked it up, it's actually even more than that.

Consider having an account for each common social media platform, then multiply that by every project - it grows quickly.

reply
m000
7 minutes ago
[-]
You are right. My memory failed me there. I should have done a quick lookup for the pricing.

It's $120/year/account for multi-user setup, and $60/year/account for single-user.

Which is still dirt cheap if you use social media professionally. E.g. what would $360 buy you if you try to do self-hosting? Maybe a day of work from a devops engineer to get this deployed for you?

reply
63stack
4 hours ago
[-]
The era of sharing some small programs that you made with others to benefit from is over imo.

You can just vibe code it yourself. If your requirements are narrower (eg. you only need support for 3 networks and not 12), you will end up with something that takes less time to develop (possibly less than a day), it will have a smaller surface for problems, and it will be much better tailored to your specific needs. If you pay attention to what the LLM is doing it will also be easier to maintain or extend further.

The surface for security vulnerabilities also gets narrower, since you "only" have to trust the LLM (which is still a huge ask, but still better than LLM + 1 random person).

reply
ninkendo
3 hours ago
[-]
> The era of sharing some small programs that you made with others to benefit from is over imo.

> You can just vibe code it yourself.

+1.

The password manager I use full time now is “Kenpass”, which has exactly one user: me. I have it on iOS/macOS/Linux, browser extensions, CLI and (native, no electron) UI for each, syncs with my homelab server over my wireguard tunnel, and it covers all my use cases. Took me maybe a week (a few hours total, spread around.) I feel no reason to share it with anyone, it does exactly what I need and I only need to fix the bugs I find, for myself.

We’re really living in crazy times.

> If you pay attention to what the LLM is doing it will also be easier to maintain or extend further

That's another nice part: I actually really enjoy feng-shui refactoring code to fit my tastes, and I've given the LLM's code a bunch of refactoring passes essentially "just for fun". I understand the codebase enough that sometimes I implement features myself instead of having the LLM do it, if I'm in the mood to. But I'd probably never have the time or energy to start such a project from scratch... having the initial MVP done in essentially one shot was a huge boost.

reply
deaux
23 minutes ago
[-]
> I have it on iOS/macOS/Linux, browser extensions, CLI and (native, no electron) UI for each, syncs with my homelab server over my wireguard tunnel, and it covers all my use cases. Took me maybe a week (a few hours total, spread around.)

A few hours including iOS? Don't you need to go through lots of hoops even if it's only for personal use? Isn't it time-limited or something?

reply
alasano
2 hours ago
[-]
> kenpass

> username "ninkendo"

Absolutely checks out.

Please share more ken related software names you use lol

reply
flir
1 hour ago
[-]
Everyone needs a to-do list: kendo.
reply
ElFitz
3 hours ago
[-]
> The surface for security vulnerabilities also gets narrower, since you "only" have to trust the LLM (which is still a huge ask, but still better than LLM + 1 random person).

On top of which, any such vulnerabilities will be mostly low value: n different implementations, each with their own idiosyncrasies, 90% of them serving one person.

reply
bryanhogan
2 hours ago
[-]
I think the same way, I'd love a social media management tool, all of the ones I found were insanely expensive or not usable / had horrible usability issues. Pitching a product by telling me it's built quickly with AI does the opposite of convincing me to try it, even though I'm in the market for the solution offered.
reply
jrm4
47 minutes ago
[-]
My gut is -- of course these concerns are valid, but, especially with "newfangled" software projects like this, I'd genuinely be surprised to see major quality differences between "human" and "vibe-coded."

I think what I mean by "newfangled" is: this isn't a low-level C memory-managed bit-flipping thing; this is the sort of thing that's already built on top of layer upon layer of the cruft that the web already is, for better or worse.

reply
baq
4 hours ago
[-]
You can vibe code minor fixes to some annoyances, including having the clanker manage the whole fork/pull-request flow if you want to contribute back, for $20/mo on Codex or Claude (though $20 is the free-trial tier there; Codex is nearly so since last week, but it should be good enough... for now).
reply
pablogiuffrida
2 hours ago
[-]
i did something similar, and i agree that 3 weeks sounds like a tight timeframe. You need to test what "vibe coding" delivers VERY thoroughly.

It can be done though. And i say that as a developer of 20 years.

reply
TrackerFF
4 hours ago
[-]
Lots and lots of commercial software is being vibe coded. The big difference here is that at least the OP is honest about it.
reply
bdangubic
3 hours ago
[-]
I think our industry really needs to get on top of terms/names, and fast. To me, “vibe coded” means “100% (or close to it) of this was written by an LLM and I don't have the slightest clue about the technology or hacking or anything related to this domain.” If that is the case here (or anywhere else), I am not touching it with a 10-foot pole (even with the honesty of the author). Something like “LLM assisted” would be a whole other thing.
reply
TrackerFF
1 hour ago
[-]
To me "vibe coding" is like Karpathy wrote in that tweet: You completely surrender to the model, accept everything, never look at the changes in code. Only feed prompts and check the resulting product.

But, of course, that's not how most programmers use it - at least I'd like to think. It's one thing when people with zero programming knowledge vibe code something in that fashion, but for serious (read: big) products I'd like to think you're still forced to do some bookkeeping.

reply
flir
53 minutes ago
[-]
I've done some personal CRUD stuff like that. It worked well - I fixed bugs as I found them, and only hit one thing it couldn't do (I find Codex weak at front-end stuff generally).

Would never publish it though, or approach paid work like that.

reply
kordlessagain
3 hours ago
[-]
I have a clue, a big one, and do 100% vibe coding. Stop splitting hairs.
reply
bdangubic
2 hours ago
[-]
if you had a clue you would not be doing 100% vibe coding :)
reply
ipaddr
3 hours ago
[-]
I was hoping this was the opposite of a creators' platform - a social media users' platform. Download all your social media (stories/posts) to one place where you can view it on your own schedule.

Is there anything like that out there?

reply
jareklupinski
22 minutes ago
[-]
RSS can get you there - lots of clients - but you have to teach your friends how to make their own websites and set up a feed

made me think of Pidgin, the chat client that could talk to any chat server

reply
xnx
1 hour ago
[-]
Big upvote for this. I want an "agent" (overused term) to scroll through all the user-hostile feeds and present what I actually want to see in a calm, ad-free, manner.

Tools like instaloader are a start, but screen scraping might be the best bet to avoid detection/banning.

reply
jezzamon
1 hour ago
[-]
Platforms really don't want you to build that. But it depends on which platforms you're talking about - open ones like bsky and mastodon could allow for something like that
reply
foobar_______
3 hours ago
[-]
Same. I was hoping for this. As much as social media frustrates me, the content can be great at times. I want an aggregation tool where I have strict control on the output. Give me the content minus the addictive never ending scroll with inflammatory posts. Basically, I want a benevolent curation of media on my terms.
reply
itherseed
2 hours ago
[-]
I want the same. The problem is that this kind of tool is outside the T&Cs of the platforms, and if they catch you using it, you would probably be banned or suspended. There will always be a small risk of that.
reply
vlachen
3 hours ago
[-]
I'm taking the submission as: if you've got an idea, here are some methods to build something.

I have a few ideas of my own; perhaps yours is something that could be created.

reply
echelon
3 hours ago
[-]
You can build one in three weeks with Claude. I'm not joking - you'll have energy for side projects you never had before, and you'll finish them.
reply
criddell
1 hour ago
[-]
Nice! A bot-built tool for posting content mostly generated by other bots and engaged with by bots.

I don't mean to belittle the cool tool you made, I'm just grumpy about the loss of what the social network could have been and what we got when it morphed into social media.

reply
JanSchu
6 hours ago
[-]
I wanted to test how far AI coding tools could take a production project. Not a prototype. A social media management platform with 12 first-party API integrations, multi-tenant auth, encrypted credential storage, background job processing, approval workflows, and a unified inbox. The scope would normally keep a solo developer busy for the better part of a year. I shipped it in 3 weeks.

Before writing any code, I spent time on detailed specs, an architecture doc, and a style guide. All public: https://github.com/brightbeanxyz/brightbean-studio/tree/main...

I broke the specs into tasks that could run in parallel across multiple agents versus tasks with dependencies that had to merge first. This planning step was the whole game. Without it, the agents produce a mess.

I used Opus 4.6 (Claude Code) for planning and building the first pass of backend and UI. Opus holds large context better and makes architectural decisions across files more reliably. Then I used Codex 5.3 to challenge every implementation, surface security issues, and catch bugs. Token spend was roughly even between the two.

Where AI coding worked well: Django models, views, serializers, standard CRUD. Provider modules for well-documented APIs like Facebook and LinkedIn. Tailwind layouts and HTMX interactions. Test generation. Cross-file refactoring, where Opus was particularly good at cascading changes across models, views, and templates when I restructured the permission system.

Where it fell apart: TikTok's Content Posting API has poor docs and an unusual two-step upload flow. Both tools generated wrong code confidently, over and over. Multi-tenant permission logic produced code that worked for a single workspace but leaked data across tenants in multi-workspace setups. These bugs passed tests, which is what made them dangerous. OAuth edge cases like token refresh, revoked permissions, and platform-specific error codes all needed manual work. Happy path was fine, defensive code was not. Background task orchestration (retry logic, rate-limit backoff, error handling) also required writing by hand.
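The hand-written backoff handling follows a standard shape. A minimal sketch, assuming generic exception types and an injectable `sleep` (all names here are illustrative, not the project's actual code):

```python
import random
import time


class RateLimited(Exception):
    """Platform returned a rate-limit error (e.g. HTTP 429)."""
    def __init__(self, retry_after=None):
        super().__init__("rate limited")
        self.retry_after = retry_after  # seconds, if the platform told us


class TransientError(Exception):
    """Timeout, 5xx, etc. - safe to retry. Anything else propagates."""


def call_with_backoff(fn, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry fn() with exponential backoff plus jitter.

    Honors an explicit retry-after hint when the platform provides one;
    re-raises the last error once attempts are exhausted.
    """
    last_exc = None
    for attempt in range(max_attempts):
        try:
            return fn()
        except (RateLimited, TransientError) as exc:
            last_exc = exc
            delay = getattr(exc, "retry_after", None) or base_delay * 2 ** attempt
        if attempt < max_attempts - 1:
            sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herd
    raise last_exc
```

In a setup like this, each provider module's job is to map platform-specific error codes onto a small set of retryable exceptions, which is exactly the defensive-path work the models skipped.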

One thing I underestimated: Without dedicated UI designs, getting a consistent UX was brutal. All the functionality was there, but screens were unintuitive and some flows weren't reachable through the UI at all. 80% of features worked in 20% of the time. The remaining 80% went to polish and making the experience actually usable.

The project is open source under AGPL-3.0. 12 platform integrations, all first-party APIs. Django 5.x + HTMX + Alpine.js + Tailwind CSS 4 + PostgreSQL. No Redis. Docker Compose deploy, 4 containers.

Ask me anything about the spec-driven approach, platform API quirks, or how I split work between the two models.

reply
dontwannahearit
4 hours ago
[-]
How much of the specs themselves came from the LLM? The development schedule https://github.com/brightbeanxyz/brightbean-studio/blob/main... has very AI-looking estimates, for example, and I can see a commit in the architecture.md file that exclusively changes em-dashes to normal dashes (https://github.com/brightbeanxyz/brightbean-studio/commit/74...), which suggests you wanted to make it seem less LLM-generated?

I ask, not to condemn, but to find out what your process was for developing the requirements. Clearly it was done with LLM help but what was the refinement process?

reply
JanSchu
4 hours ago
[-]
The spec document was also written by Claude (over many iterations) with lots of manual additions. It still took me 4 full days to get the specs to a level I was happy with.

One main thing I did was to use the deep research feature of Claude to get a good understanding of what other tools are offering (features, integrations etc.)

Then each feature in the specs document got refined with manual suggestions and screenshots of other tools that I took.

reply
drabbiticus
2 hours ago
[-]
Thanks for sharing!

> Before writing any code, I spent time on detailed specs, an architecture doc, and a style guide. All public: https://github.com/brightbeanxyz/brightbean-studio/tree/main...

> It took me tho 4 full days to get the specs to the level I was happy with.

When I click on history there I see only a single commit for these docs. Would you be willing to share some or all of the conversation you had with the LLM (in a gist or in the repo) that led to these architecture docs? Understand if you can't, but I'm sure it would be super instructive for people trying to understand the process of doing something like this and the types of guide rails that help to move the process forward productively.

reply
vladsanchez
2 hours ago
[-]
First, congrats on your accomplishment(s) and leveraging your AI+Python+WebDev talents.

Isn't this a testament to the SaaS-pocalypse? What's stopping anyone from doing the same to BrightBean? What's stopping anyone with a little domain knowledge and a $200+ Claude plan from cloning your app, building yet another gap-filling, slightly improved content-syndication version, and going to market? Is it worth taking it to market when anyone can perpetuate the cycle?

I'm genuinely interested in knowing your thoughts.

reply
DrammBA
1 hour ago
[-]
> Isn't this a SaaS-pocaplyse testament? What's stopping anyone from doing the same to BrightBean?

It being open source doesn't help it either, so easy to malus/chardet it.

reply
dewey
5 hours ago
[-]
Thank you for this write-up - it's much more interesting than all the "Show HN"s that don't mention anything about AI even though you can see it in every corner.

What you describe has also been my experience so far building projects mostly with AI from detailed specs, but with Rails instead of Django.

reply
mrsekut
4 hours ago
[-]
That was an interesting article. I have a few questions about the workflow.

1. You mentioned developing tasks in parallel—how many agents were you actually running at the same time? Did you ever reach a point where, even if you increased the degree of parallelism, merging and reviews became the bottleneck, and increasing the number further didn’t speed things up?

2. I really relate to the idea of “80% of features in 20% of the time, then 80% on polish.” Did you use AI for this final polishing phase as well? In other words, did you show the AI screenshots of the screens and explain them? Also, when looking back, do you feel that if you had written the initial specifications more carefully, you could have completed the work faster?

reply
JanSchu
3 hours ago
[-]
What I did was break the development into different layers, which had to be completed one after another, since the functionalities build on each other. Each layer had independent work streams which ran in parallel. Each work stream was one independent worktree/session in Claude Code.

First I triggered all work streams per layer and brought them to a level of completion I was happy with. Then I merged them one after another (challenging the implementation on GitHub with @codex) and rebased when moving to the next work stream.

This is roughly how it looked like:

Layer 0 - Project Scaffolding

Layer 1 — Core Features
  Stream A — Content Pipeline
  Stream B — Social Platform Providers
  Stream C — Media Library
  Stream D — Notification System
  Stream E — Settings UI

                        T-0.1 (Scaffolding)
                              │
                        T-0.2 (Core Models + Auth)
                              │
          ┌───────────────────┼───────────────────┬──────────────┐
          │                   │                   │              │
     Stream A            Stream B            Stream C       Stream D
     (Content)           (Providers)         (Media)        (Notifs)
          │                   │                   │              │
     T-1A.1 Composer    T-1B.1 FB/IG/LI    T-1C.1 Library  T-1D.1 Engine
          │              T-1B.2 Others           │              │
     T-1A.2 Calendar         │                   │         Stream E
          │                  │                   │         T-1E.1 Settings UI
     T-1A.3 Publisher ◄──────┘                   │
          │                                      │
          └──────────◄───────────────────────────┘
          (Publisher needs providers + media processing)

Layer 2 — Collaboration & Engagement
  Stream F — Approval & Client Portal
  Stream G — Inbox
  Stream H — Calendar & Composer Enhancements
  Stream I — Client Onboarding

          Layer 1 complete
                │
    ┌───────────┼───────────┬──────────────┐
    │           │           │              │
 Stream F   Stream G    Stream H       Stream I
 (Approval  (Inbox)     (Calendar+     (Onboarding)
  + Portal)              Composer
    │                    enhance)
 T-2F.1 Approval
    │
 T-2F.2 Portal
Thus I ran up to 4 agents in parallel, but to be honest this is the max level of parallelism my brain was able to handle - I really felt like the bottleneck here.

Additionally, your token usage gets very high when you have so many agents working at the same time, so I very often hit my Claude session token limits and had to wait for the next session to begin (and I do have the 5x Max plan).
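The layer/stream split boils down to grouping a task dependency graph into levels: everything within a level can run as parallel agent sessions, and a level only starts once the previous one is merged. A rough sketch of that grouping (task names and dependencies here are illustrative, not the real task IDs):

```python
def layers(deps):
    """Group tasks into levels: a task lands in the first level after
    all of its dependencies. Tasks in the same level have no path
    between them, so their agent sessions can safely run in parallel."""
    levels, done = [], set()
    while len(done) < len(deps):
        ready = sorted(t for t, ds in deps.items()
                       if t not in done and all(d in done for d in ds))
        if not ready:
            raise ValueError("dependency cycle")
        levels.append(ready)
        done.update(ready)
    return levels


# Illustrative slice of the graph in the diagram above
deps = {
    "scaffolding": [],
    "core-models": ["scaffolding"],
    "content": ["core-models"],
    "providers": ["core-models"],
    "media": ["core-models"],
    "publisher": ["content", "providers", "media"],
}
print(layers(deps))
# [['scaffolding'], ['core-models'], ['content', 'media', 'providers'], ['publisher']]
```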

reply
jbk
4 hours ago
[-]
This is amazing. I started doing the same, but I did not have the time to polish it.

Questions: why no X? And do you have a feature to resize (summarize?) the text to fit into short boxes?

reply
hirako2000
3 hours ago
[-]
How much did it cost in tokens?
reply
incidentnormal
5 hours ago
[-]
What did your harness look like for this?
reply
stavros
5 hours ago
[-]
This is interesting, how do you publish to LinkedIn? I thought they didn't allow automated posts.
reply
dewey
5 hours ago
[-]
reply
stavros
4 hours ago
[-]
Very helpful, thanks!
reply
hyperionultra
5 hours ago
[-]
Why Postgres instead of classic MySQL?
reply
purerandomness
5 hours ago
[-]
MySQL does not let you have transactional DDL statements (ALTER, CREATE, CREATE INDEX, etc.).

If you're building anything serious and your data integrity is important, use Postgres.

Postgres is much stricter, and always was. MySQL tried to introduce several strict modes to mitigate the problems that they had, but I would always recommend to use Postgres.
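To make the transactional-DDL point concrete: on Postgres a failed migration rolls back cleanly, schema changes included, while MySQL implicitly commits each DDL statement. A self-contained demo (using SQLite purely because it fits in a snippet; it behaves like Postgres on this point):

```python
import sqlite3

# autocommit mode, so BEGIN/ROLLBACK are fully under our control
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()

cur.execute("BEGIN")
cur.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
cur.execute("INSERT INTO posts (body) VALUES ('draft')")
cur.execute("ROLLBACK")  # the DDL rolls back too: the table never existed

tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
print(tables)  # [] - on MySQL the empty table would have survived the rollback
```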

reply
faangguyindia
4 hours ago
[-]
Such apps should use SQLite; it's enough for this type of app.
reply
eb0la
3 hours ago
[-]
MySQL or Postgres are the DB of choice if you want a managed database in the cloud.

Probably Postgres is there because you can use it as a queue (https://livebook.manning.com/book/just-use-postgres/chapter-...)

reply
hk__2
5 hours ago
[-]
"Why MySQL instead of Postgres" should be the right question nowadays.
reply
dewey
5 hours ago
[-]
Postgres isn't a newcomer any more. For most projects that I see it's the default and the "classic" already.
reply
JanSchu
4 hours ago
[-]
Postgres is simply a battle proven technology.
reply
hk__2
5 hours ago
[-]
Nothing wrong here, but Django/HTMX seem like quite 'old' technologies to me for a new project made in 2026. Nowadays I use FastAPI/SQLAlchemy for the backend and SvelteKit on the frontend.
reply
rrr_oh_man
4 hours ago
[-]
You don’t need a Drillator-X 3000 AI Ready™ if a simple screwdriver gets the job done. IMHO that's the main thing technical people get wrong about B2B problems.

Also calling HTMX old makes me feel old.

reply
hk__2
29 minutes ago
[-]
> You don’t need a Drillator-X 3000 AI Ready™ if a simple screwdriver gets the job done

Yes, and Django feels like more of a Drillator than a simple screwdriver to me.

reply
JanSchu
4 hours ago
[-]
yeah htmx is from 2020, it feels like yesterday
reply
benterix
4 hours ago
[-]
SvelteKit is also from 2020.
reply
JanSchu
4 hours ago
[-]
I originally have a data science background, so Python is usually my go-to language, and I already have a lot of experience with Django. That helps a lot when reviewing AI code and judging architecture, etc.

And for HTMX, I simply wanted something lightweight and not very invasive, to keep things simple and dependencies low.

In my head this was a good consideration to keep complexity low for my AI agents :-)

reply
hk__2
28 minutes ago
[-]
Sure, there is nothing wrong here - I was just talking about a feeling. HTMX is quite recent, but this idea of embedding logic into HTML reminds me of the old jQuery days.
reply
JodieBenitez
4 hours ago
[-]
> Django/HTMX seem quite 'old' technologies to me for a new project made in 2026.

It's simple, it works, it's efficient, safe, and there are tons of online resources for it. Excellent choice, even more so when using a coding agent.

reply
purerandomness
4 hours ago
[-]
FastAPI is quite old (2018)

Svelte even older (2016; SvelteKit was just a new version in 2022)

SQLAlchemy is ancient (2006)

Use newer tech, like HTMX (2020)

(/s obviously)

reply
_heimdall
4 hours ago
[-]
HTMX is 5 years old, version 2 is just under 2 years old, and the last release (2.0.7) came out 7 months ago.
reply
kuba-orlik
1 hour ago
[-]
I don't get it. The app is about social media, but its website is something about a YouTube Intelligence API for AI Agents. Not sure what to make of it.
reply
themonsu
4 hours ago
[-]
Does it work with multiple social accounts? E.g. if I have 100 customers whose social media I manage for content posting.
reply
JanSchu
4 hours ago
[-]
yes
reply
domo__knows
3 hours ago
[-]
Legitimately cool project OP. As a Django developer working in the social space I'm sure I'll be referencing your workflows.
reply
xnx
3 hours ago
[-]
Isn't automated posting forbidden by most platforms, and will risk getting any account banned?
reply
JanSchu
3 hours ago
[-]
Each platform has an API that you can use to post. You just have to set up a developer account for each platform.
reply
hirako2000
3 hours ago
[-]
That depends on the platform, and can change at any moment. The same applies to paid-for tools; this merely makes it open source / self-hosted.
reply
alexdobrenko
2 hours ago
[-]
has there been any final word on whether social platforms are throttling posts that come from platforms like this?
reply
throwatdem12311
4 hours ago
[-]
Why does it matter how long it took you to make it?
reply
ms7892
4 hours ago
[-]
Woah! I've been looking for something like this for a long time.
reply
banbangtuth
3 hours ago
[-]
Just curious: why Python? Why not, say, Go or TypeScript? Yes, you can make TypeScript server-rendered too, without the React stuff.
reply
nottorp
4 hours ago
[-]
Is it in Rust too?
reply
pbiggar
2 hours ago
[-]
Can you add support for UpScrolled? https://upscrolled.com/en/
reply
JanSchu
1 hour ago
[-]
I'll add to the implementation list
reply
benmarten
5 hours ago
[-]
No x?
reply
JanSchu
4 hours ago
[-]
I did not include it yet, because you have to pay for the API. They changed their pricing model recently to pay-per-request only. I'll be looking into it in the next few weeks.
reply
forsalebypwner
5 hours ago
[-]
their API is insanely expensive
reply
donohoe
4 hours ago
[-]
I’d argue it’s not worth it. Engagement and referral traffic from it continue to tank.
reply
cyanydeez
5 hours ago
[-]
Do people still think Twitter is a valuable place (besides bot owners)?
reply
brobdingnagians
5 hours ago
[-]
It seems like geopolitical statements and international announcements happen a lot on Twitter/X these days.
reply
cyanydeez
1 hour ago
[-]
Are those statements actionable, or do they simply feed Polymarket and US stocks?
reply
grvdrm
4 hours ago
[-]
I’m not a power user/poster but I see it as no less valuable than many other similar places. All of them have similar problems. For me it’s probably bifurcated by time spent tuning the feeds.
reply
bengale
5 hours ago
[-]
I know some people have ideological things going on that make them choose different networks, but they have more than half a billion active users so it's not exactly a ghost town.
reply
rglullis
3 hours ago
[-]
I go check it out once a month, it's nothing but bots, AI hustlers and Musk jerking. Why would any brand actively invest in their presence there?
reply
rocketpastsix
4 hours ago
[-]
how many of those "active" users are just bots?
reply
spiderfarmer
4 hours ago
[-]
In The Netherlands it’s a full-on crazy town. I’m not kidding. It’s bottom-of-the-barrel vitriolic garbage. Not one positive, insightful or interesting tweet among them.
reply