It looks like that limit has been raised more recently: https://stackoverflow.com/questions/32950503/can-i-have-100s... but I'm still skeptical it's what you want. You'd need a consumer for each of them, and you'd be tracking offsets for all of them.
Much more normal would be a single topic here, partitioned on user.
You can have 2 consumers then. One that pulls things off and notifies. Another one that pulls things off and stores to the DB, which is indexed by user ID.
Then when a user reads a notification, you flip a read bit on that row in the DB.
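A minimal sketch of that layout, assuming the confluent-kafka Python client; the broker address, topic name, and payload shape are all made up for illustration. Kafka hashes the message key to pick a partition, so keying on the user ID keeps each user's notifications in order within one partition:

    from confluent_kafka import Producer
    import json

    # Hypothetical broker address and topic name.
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def send_notification(user_id: str, payload: dict) -> None:
        # Keying on user_id makes Kafka hash all of this user's
        # notifications to the same partition, preserving order.
        producer.produce(
            "notifications",
            key=user_id,
            value=json.dumps(payload).encode("utf-8"),
        )

    send_notification("user-42", {"type": "mention", "text": "you were mentioned"})
    producer.flush()  # block until outstanding messages are delivered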
If each user has a topic, your partition count is unbounded: at least one partition per user.
I’d use a single notification topic, set a reasonable number of partitions on it and partition by user id.
Use the topic as a firehose.
You can have a consumer sending push notifications.
You can have a consumer writing to a database, keeping the user's inbox in the DB with a read flag on each message, etc. Users query the DB; Kafka queues your writes. As mentioned above, you may end up with consistency issues if you're also sending push notifications. In that case you'd want CDC off your inbox table so the notification is only sent once the message is in the inbox.
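A sketch of that inbox-writer consumer under the same assumptions (same made-up topic, sqlite3 standing in for whatever database you'd actually run). The distinct group.id is what gives this consumer its own offsets, independent of the push-notification consumer:

    import sqlite3
    from confluent_kafka import Consumer

    db = sqlite3.connect("inbox.db")
    db.execute("CREATE TABLE IF NOT EXISTS inbox "
               "(user_id TEXT, body TEXT, read INTEGER DEFAULT 0)")

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # hypothetical broker
        "group.id": "inbox-writer",             # separate group = separate offsets
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["notifications"])

    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # The producer set the message key to the user ID.
        db.execute("INSERT INTO inbox (user_id, body) VALUES (?, ?)",
                   (msg.key().decode("utf-8"), msg.value().decode("utf-8")))
        db.commit()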
You don’t necessarily need Kafka for this; it’s being used as a queue here rather than a log. Unless, that is, you want to keep every notification event ever sent so you can rebuild the inbox tables, but then you’ll need to publish read states too, and you’re into event sourcing, treating the inbox table as a materialised view.
Big rabbit hole. A FIFO queue sounds simplest if you want async notification handling.
A more pragmatic and simple approach would be to have the consumer shove it into a database, then have your app pull from the DB. This is more persistent, and the DB is optimized for the kinds of queries you want to run.
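And the read path, continuing the illustrative sqlite3 inbox table from above (table and column names are made up): the app queries the DB directly, and flipping the read bit when a user reads a notification is just an UPDATE, so Kafka never sits on the read path:

    import sqlite3

    db = sqlite3.connect("inbox.db")

    def unread(user_id: str):
        # Served straight from the DB; index user_id for this query.
        return db.execute(
            "SELECT rowid, body FROM inbox WHERE user_id = ? AND read = 0",
            (user_id,)).fetchall()

    def mark_read(rowid: int) -> None:
        # The read-flag flip mentioned in the first comment.
        db.execute("UPDATE inbox SET read = 1 WHERE rowid = ?", (rowid,))
        db.commit()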
2. DB for long-term storage with frequent reads or updates
3. S3 etc. (compressed files) for long-term archival storage
4. Flink, Beam, etc. for real-time or batch processing when some logic, transformation, or aggregation is needed on the messages