Last year, PlasticList found plastic chemicals in 86% of tested foods—including 100% of baby foods they tested. Around the same time, the EU lowered its “safe” BPA limit by 20,000×, while the FDA still allows levels roughly 100× higher than Europe’s new standard.
That seemed solvable.
Laboratory.love lets you crowdfund independent lab testing of the specific products you actually buy. Think Consumer Reports × Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid’s snacks, or whatever you’re curious about.
Find a product (or suggest one), contribute to its testing fund, and get full lab results when testing completes. If a product doesn’t reach its goal within 365 days, you’re automatically refunded. All results are published publicly.
We use the same ISO 17025-accredited methodology as PlasticList.org, testing three separate production lots per product and detecting down to parts-per-billion. The entire protocol is open.
Since last month’s “What are you working on?” post:
- 4 more products have been fully funded (now 10 total!)
- That’s 30 individual samples (we test three separate batches per product) and 60 total chemical panels (two per sample: one for BPA/BPS/BPF, one for phthalates)
- 6 results published, 4 in progress
The goal is simple: make supply chains transparent enough that cleaner ones win. When consumers have real data, markets shift.
On https://laboratory.love/faq you say: "We never accept funding from companies whose products we might test. All our funding comes from individual contributors."
On https://laboratory.love/blog you say: "If you're a product manufacturer interested in having your product tested, we welcome your participation in funding."
Here is something I'm struggling with as a user. I look at a product (this tofu for example [0]) and see the amounts. And then I have absolutely no clue what it means. Is it bad? How bad? I see nanograms in one place and μg in an info menu - is μg a nanogram? And what is LOQ? Virtually 0? Simply less than the recommended amount?
I think 99% of people will have the same reaction. They will have no idea what the information means.
I clicked on some info icons to try and get more context. The context is good (explains what the different categories are) but it still didn't help me understand the amounts. I went to "About" and it didn't help with this. I went to the FAQ and the closest I can find is:
>What makes a result 'concerning'?
We don't make safety judgments. Instead, we compare results to established regulatory limits from FDA, EPA, and EFSA, noting when products exceed these thresholds. We also flag when regulatory limits themselves may be outdated based on new research.
I understand that you don't want to make the judgement and it's about transparency and getting the information. But the information is worthless if people don't know what it means.
1. An example result is "https://laboratory.love/product/117", which is a list of chemicals and measurements. Is there a visualization of how these levels relate to regulations and expert recommendations? What about a visualization of how different products in the same category compare, so that consumers know which brand is supposedly "best"? Maybe a summary rating, as stars or color-coded threat level?
2. If you find regulation-violating (or otherwise serious) levels of undesirable chemicals, do you... (a) report it to FDA; (b) initiate a class-action lawsuit; (c) short the brand's stock and then news blitz; or (d) make a Web page with the test results for people to do with it what they will?
3. Is 3 tests enough? On the several product test results I clicked, there's often wide variation among the 3 samples. Or would the visualization/rating tell me that all 3 numbers are unacceptably bad, whether it's 635.8 or 6728.6?
4. If I know that plastic contamination is a widespread problem, can I secretly fund testing of my competitors' products, to generate bad press for them?
5. Could this project be shut down by a lawsuit? Could the labs be?
1. I'm still working to make results more digestible and actionable. This will include the %TDI toggle (total daily intake, for child vs adult and USA vs EU) as seen on PlasticList, but I'm also tinkering with an even more consumer-friendly 'chemical report card'. The final results page would have both the card and the detailed table of results. (There's a sketch of the %TDI math after this list.)
2. I have not found any regulation-violating levels yet, so in some sense, I'll cross that bridge when I get there. Part of the issue here is that many believe the FDA levels are far too relaxed, which is why demand for a service like laboratory.love exists.
3. This is part of the challenge that PlasticList faced, and a lot of my thinking around the chemical report card is related to this. Some folks think a single test would be sufficient to catch major red flags. I think triplicate testing is a reasonable balance: statistically robust without being completely cost-prohibitive.
4. Yes, I suppose one could do that, as long as the funded products can be acquired by laboratory.love anonymously through their normal consumer supply chains. Laboratory.love merely acquires three separate batches of a given product from different sources, tests them at an ISO/IEC 17025-accredited lab, and publishes the data.
5. I suppose any project can be shut down by a lawsuit, but laboratory.love is not currently breaking any laws as far as I'm aware.
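Back on the %TDI toggle from point 1, here's a minimal sketch of what that math looks like (my own illustration with made-up numbers and a hypothetical EFSA-style TDI, not laboratory.love's actual code):

    # Sketch of a %TDI calculation (hypothetical values, not real results).
    def percent_tdi(conc_ng_per_g, serving_g, body_weight_kg, tdi_ug_per_kg_day):
        """One serving's exposure as a percentage of the tolerable daily intake."""
        exposure_ug = conc_ng_per_g * serving_g / 1000.0    # ng -> ug
        allowance_ug = tdi_ug_per_kg_day * body_weight_kg   # this person's daily TDI
        return 100.0 * exposure_ug / allowance_ug

    # A 150 g serving at 20 ng/g of a phthalate, against a 50 ug/kg/day TDI:
    print(percent_tdi(20, 150, 70, 50))  # 70 kg adult -> ~0.09 %
    print(percent_tdi(20, 150, 15, 50))  # 15 kg child -> 0.4 %

The same measured level can be several times more significant for a child than an adult, which is why the child/adult toggle matters.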
The UK levels are stricter and generally more up to date; I personally follow those rather than the FDA's. Could be nice to show those violations as a comparison to the FDA.
What bugs me is that plastics manufacturers advertise "BPA-free", which is technically correct, but then add a very similar chemical from the same family that has the same effect on the plastic (good) but also the same effect on your endocrine system (not good).
I'll add subscriptions as a more formal option on laboratory.love soon!
Disclaimer: I don't think I can offer a 365-day refund with recurring donations like this. The financial infrastructure would add too much complexity.
Serious question: around 1900 meat was often preserved using formaldehyde, and milk was adulterated with water and chalk, and sometimes with pureed calf brains to simulate cream.
I hope we can agree that we are better off than that now.
What I'm curious about is whether you think it's been a steady stream of improvements, and we just need to improve further? Or if you think there was some point between 1900 and now where food health and safety was maximized, greater than either 1900 or now, and we've regressed since then?
Trying to collapse high-dimensional, complex phenomena onto a single axis usually gives one a false sense of certainty. One should avoid it as much as possible.
And yet we report gun deaths per year, smoking rates, sea warming, etc. etc. The error isn't in producing or considering an aggregate result, but in ignoring where it came from. Since this is an internet forum and not a policy think tank I think that error is largely moot.
Or put another way: it was a simple question that the ggp can answer or not as they choose. I was just curious for their perspective.
My instinct is that things have largely gotten better over time. At a super-macro level, in 1900 we had directly adulterated food that e.g. the soldiers receiving Chicago meat called "embalmed". In the mid-20th century we had waterways that caught fire and leaded gas.
By the late 20th we had clean(er) air (this is all from a U.S. perspective) and largely safe food. I think if we were to claim a regression, the high point would have to be around 2000, but I can't point to anything specific going on now that wasn't also going on then -- e.g. I think microplastics were a thing then as well, we just weren't paying attention.
Where are you? This project is not necessarily limited to products that are available in the United States. Anything that can be shipped to the United States is still testable.
Given the current reach of the project (read: still small!), I suspect for a while yet the majority of successfully funded testing will be by concerned individuals with disposable income. It is cheaper and much faster to go through laboratory.love than it would be to partner with a lab as an individual (plus the added bonus that all data is published openly).
I've yet to have any product funded by a manufacturer. I'm open to this, but I would only publish data for products that were acquired through normal consumer supply chains anonymously.
This looks so cool! I wish it told me whether the levels found for tested products were good/bad - I have no prior reference, so the numbers meant nothing to me.
Both of them do measurements and YouTube videos. Neither one has a particularly good index of their completed reviews, let alone tools to compare the data.
I wish I could subscribe to support a domain like “loud speaker spin tests” and then have my donation paid out to these reviewers based on them publishing new high quality reviews with good data that is published to a common store.
A couple of months ago, I saw a tweet from @awilkinson: “I just found out how much we pay for DocuSign and my jaw dropped. What's the best alternative?”
Me being naive, I thought “how hard would it actually be to build a free e-sign tool?”
Turns out not that hard.
In about a weekend, I built a UETA and ESIGN compliant tool. And it was free. And it cost me less than $50. Unlimited free e-sign. https://useinkless.com/
You’d be surprised how much trust people place in legal departments, balance sheet strength, and talent capacity. All things for which I had to turn down superior technical proposals in the past. The old saying “Nobody gets fired for buying IBM” still runs strong.
Free e-signatures are a great idea. Have you considered getting a foundation to back the project, maybe taking out some indemnity insurance, or perhaps raising a dispute fund?
Couldn’t agree more, trust is the currency in enterprise SaaS.
At Flowmono Sign, https://www.flowmono.com/en-US/ we’ve seen how adding layers like audit trails, compliance verification, and insured uptime completely changes how legal and procurement teams engage with e-sign tools.
Free is great, but trusted and simple is what keeps adoption steady.
https://penneo.com/ is a good alternative. And while I applaud your effort to do something in this space, personally I'd prefer a solution that's been thought over by lawyers, etc. Faster is not better in this particular space.
Really neat build, always great seeing founders scratch their own itch.
At Flowmono Sign, https://www.flowmono.com/en-US/ we’re tackling the same pain point but focused on helping scaling teams automate signing, approvals, and compliance in one flow.
Love seeing this kind of innovation in the space!!
I am working on making ultra-low cost freeze-dried enzymes for synthetic biology.
For example, a PCR reaction (a common reaction used to amplify DNA) costs about $1, and we're doing tons every day. Since it is $1, nobody really tries to do anything about it - even if you do 20 PCRs in one day, eh, it's not that expensive vs everything else you're doing in the lab. But that calculus changes once you start scaling up with robots, and that's where I want to be.
Approximately $30 of culture media can produce >10,000,000 reactions worth of PCR enzyme, but you need the right strain and the right equipment. So, I'm producing the strain and I have the equipment! I'm working on automating the QC (usually very expensive if done by hand) and lyophilizing for super simple logistics.
My idea is that every day you can just put a tube on your robot and it can do however many PCR reactions you need that day, and at the end of the day, you just throw it out! Bring the price from $1 each to $0.01 + greatly simplify logistics!
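For what it's worth, the back-of-envelope math on those numbers (just restating the figures above in code):

    # Figures from above: $30 of media -> >10,000,000 reactions' worth of enzyme.
    media_cost_usd = 30.0
    reactions = 10_000_000
    print(media_cost_usd / reactions)  # -> $0.000003 of media per reaction
    # So a $0.01/reaction target is dominated by QC, lyophilization, and
    # logistics, not by the raw enzyme production cost.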
Of course, you can't really make that much money off of this... but will still be fun and impactful :)
As a bio hobbyist, this is fantastic! I don't do enough volume of PCR to think of it as expensive, but your use case of high-volume/automatic sounds fantastic! (And so many other types of reagents and equipment are very expensive).
Some things that would be cool:
- Along your lines: In general, cheap automated setups for PCR and gels
- Cheap/automatic quantifiable gels. E.g. without needing a kV-supply capillary system, expensive qPCR machines, etc.
- Cheaper enzymes in general
- More options for -80 freezers
- Cheaper/more automated DNA quantification. I got a v1 Qubit which gets the job done, but new ones are very expensive, and reagent costs add up.
- Cheaper shaking incubator options. You can get cheap shakers and incubators, but not cheap combined ones... which you need for pretty much everything. Placing one in the other can work, but is sub-optimal due to size and power-cord considerations.
- More centrifuges that can do 10,000 g... this is the minimum for many protocols.
- Ability to buy pure ethanol without outrageous prices or hazardous shipping fees.
- Not sure if this is feasible but... reasonable-cost machines to synthesize oligos?
I've thought a lot about this! My main goal is to create a cloud lab that doesn't suck - i.e., a remote lab that is actually useful for people - and a lot of these are relevant. Let me run down the ideas I have for each:
1. You can purchase gel boxes that do 48 to 96 lanes at once. I'd ideally have it on a robot whose only purpose is to load and run these once or twice a day. All the samples coming through get batched together and run
2. A Bioanalyzer seems nice for quantifying things like PCRs to make sure you're getting the right size, but to be honest I haven't thought that much about it. qPCR actually becomes very cheap if you can keep the machines full. You can also use something like a NanoDrop, which is much, much cheaper.
3. Pichia pastoris expression ^
4. You can use a plate reader (another thing that batches nicely), but you can't really get around the reagents (though they're cheaper in bulk from China)
5. If you aggregate, these become really cheap. The complicated bits are getting the proper cytomat parts for shaking, as they are limited on the used market
6. These can't be automated well, so I honestly haven't thought too much about it.
7. Reagents are cheaper in bulk from China
8. ehhhh, maybe? But not really. But if you think about a scaled centralized system, you can get away with not using oligos for a lot of things
That sounds really cool. I wouldn't agree that you can't make money off this - you can make money off anything, just find people who need it, and it seems you have.
Anyhow good luck. Would love to follow if you do anything with this in the future. Do you have a blog or anything?
I've been working on a 3D voxel-based game engine for like 10 years in my spare time. The most recent big job has been to port the world gen and editor to the GPU, which has had some pretty cute knock-on effects. The most interesting is that you can hot-reload the world gen shaders and out pop your changes on the screen, like a voxel version of Shadertoy.
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
A simple comment, but wow I really like the look of Bonsai! The lighting, shading and shapes are really beautiful, I think a game made in this would feel really unique
A project to implement 1000 algorithms. I have finished around 400 so far and I am now focusing on adding test cases, writing implementations in Python and C, and creating formal proofs in Lean.
It has been a fun way to dive deeper into how algorithms work and to see the differences between practical coding and formal reasoning. The long-term goal is to make it a solid reference and learning resource that covers correctness, performance, and theory in one place.
The project is still in its draft phase and will be heavily edited over the next few months and years as it grows and improves.
If anyone has thoughts on how to structure the proofs or improve the testing setup, I would love to hear ideas or feedback.
Wow, that looks fun, and you probably get to learn a lot about algorithms.
I don't have any feedback, but rather a question. I've seen many repositories of people sharing their algorithms in many different languages, at least on GitHub (e.g. https://github.com/TheAlgorithms). What did you find missing from those repositories that made you want to write a book and implement hundreds of algorithms yourself?
Those implementations seem random to me, with little explanation, no test cases, no formal proofs, and often inconsistent naming or structure across languages. Many repositories like TheAlgorithms are great collections, but they feel more like code dumps than true learning resources. You can find an implementation of Dijkstra or QuickSort, but often there is no context: why it works, how to prove it correct, what the complexity is, or how to test it against edge cases. For someone who wants to learn algorithms deeply, that missing layer of reasoning and validation is critical.
No organization for learners either. It jumps straight into implementations without a logical flow from fundamentals. I want to build something more structured: start from the very foundation (like data structures, recursion, and complexity analysis), then move to classical algorithms (search, sort, graph, dynamic programming), and eventually extend to database internals, optimization, and even machine learning or AI algorithms. Basically, a single consistent roadmap from beginner to researcher level, where every algorithm connects to the next and builds intuition step by step.
Another very good resource for beginners is https://www.hello-algo.com. At first, I actually wanted to contribute there, since it explains algorithms visually and in simple language. But it mostly covers the basics and stops before more advanced or applied topics. I want to go deeper and treat algorithms as both code and theory, with mathematical rigor and formal proofs where possible. That is something I really liked about Introduction to Algorithms (CLRS) and of course The Art of Computer Programming (TAOCP) by Knuth. They combine reasoning, math, and practice. My goal is to make something in that spirit, but more practical and modern, bridging the gap between academic books and messy open source repos.
Another reason for writing the book is that many developers see "algorithms" as something only needed for FAANG interviews, not for real work. For beginners and even seniors, learning algorithms often just means doing LeetCode problems, which most people dislike but feel forced to do.
I want to change that view and show that algorithms are beautiful and useful beyond interviews. They appear everywhere, from compilers to databases to the Linux kernel, where I found many interesting data structures worth exploring. (I will share more about this topic later.)
I hope to share more of these insights and connect with others who enjoy discussing real world algorithm design, which is what I love most about the Hacker News community (except for the occasional trolls that show up from time to time).
For more context, I actually used The Algorithms as a reference when working on my own programming language, Mochi, which includes around 150–300 algorithms (I don't remember exactly) implemented directly in Mochi. These are then transpiled to over 25 programming languages such as C, Haskell, Java, Go, Scala, and more:
https://github.com/mochilang/mochi/tree/main/tests/algorithm...
The VM and transpiler were originally implemented by hand, and later I used Codex to help polish the code. The generated output works, though it is a bit messy in places. Hopefully, after finishing a few books, I can return to the project with more experience and add better use cases for it.
Great idea. I had been thinking about pretty much the same but perhaps targeted at executives and perhaps including AI/Cloud.
I usually feel too many people wildly throw around terms they hardly understand, in the belief they cannot possibly understand them. That's so wrong; every executive should understand some of what determines the bottom line. It's not like people skip economics because it's hard.
Would love to perhaps contribute sometime next year. Starred, and until then good luck - perhaps add a donation link!
Thanks! I completely agree. In more than ten years of consulting, training, and architecting systems for clients across government and enterprise, I have seen the same pattern. From "big data" and "cloud" to now "AI" and "GenAI", these buzzwords have often been misunderstood by most of the C-suite. In my entire career, explaining the basics and setting the right expectations has always been the hardest part.
I really like your idea of targeting executives and connecting it to real business outcomes. Getting decision makers to truly understand the fundamentals behind the technology would make a huge difference.
I hope the next generation learns to love "C" and Algorithms again. I have rediscovered my appreciation for C recently, even though Go is my main professional programming language.
I feel like the presentation of Lomuto's algorithm on p.110 would be improved by moving the i++ after the swap and making the corresponding adjustments to the accesses to i outside the loop. Also mentioning that it's Lomuto's algorithm.
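For readers following along, here's the variant being suggested, sketched in Python rather than the book's C (increment after the swap, so no i+1 adjustment is needed outside the loop):

    def lomuto_partition(a, lo, hi):
        """Partition a[lo..hi] around the pivot a[hi]; return the pivot's index."""
        pivot = a[hi]
        i = lo                        # next free slot for an element < pivot
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1                # increment *after* the swap
        a[i], a[hi] = a[hi], a[i]     # pivot lands directly at index i
        return i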
These comments are probably too broad in scope to be useful this late in the project, so consider them a note to myself.
C as the language for presenting the algorithms has the advantage of wide availability, not sweeping performance-relevant issues like GC under the rug, and stability, but it ends up making the implementations overly monomorphic. And some data visualizations as in Sedgewick's book would also be helpful.
My biggest inspiration for this project, though, is The Art of Computer Programming (TAOCP), that level of depth and precision is the ultimate goal. I'm also planning to include formal proofs of all algorithms in Lean, though that could easily turn into a 10-year project.
Sedgewick's Algorithms book is great for practical learning but too tied to Java and implementation details. It is a bit shallow on theory, though the community and resources for other languages help.
That said, I personally prefer Introduction to Algorithms (CLRS) for its formal rigor and clear proofs, and Grokking Algorithms for building intuition.
The broader goal of this project is to build a well tested, reference quality set of implementations in C, Python, and Go. That is the next milestone.
Was the very first edition of Sedgewick's Algorithms written in Pascal? I heard that but never actually saw that version myself.
Your comment brought back an old memory for me. My first programming language in high school was Turbo Pascal. That IDE was amazing: instant compilation, the blue screen TUI, F1 for inline help, a surprisingly good debugger, and it just felt so smooth and fast back then. No internet needed, no AI assistance, just pure focus and curiosity. Oh, how I really miss those days :)
For clarification, I meant the Algorithms, 4th Edition book at https://algs4.cs.princeton.edu/home/ which is entirely in Java. All the example code, libraries, and exercises there use Java, and the authors explicitly note that the book is written for that language.
However, you are right, Prof. Sedgewick has long maintained versions of his material across multiple languages. I remember that the third edition has C, C++ and Java versions.
To reduce the current monomorphism, I might add a generic version using void* and a comparator, or generate code for a few key types, while keeping the simple monomorphic listings for readability. (Though this would make the code a bit more complex)
Nice to see that you are still around with this after your earlier submission (https://news.ycombinator.com/item?id=45448525) was flagged over LLM-slop issues with your work. How are you addressing those?
I am working on something pretty radical in this space. It is a book of algorithms that derives all the algorithms without telling you what the algorithm is. For example, for binary search your book quickly went into the mid = (low + high) / 2 thing. My method is radically different: I take an even-sized array and try to actually find the element step by step, then take an odd-sized array and find it step by step, derive a general hypothesis, and then create the formula for that algorithm from it. This is going to be orders of magnitude above any data structures and algorithms books and courses when it comes out. Pinky promise.
I found a neat way to do high-quality "semantic soft joins" using embedding vectors[1] and the Hungarian algorithm[2] and I'm turning it into an open source Python package:
It hits a sweet spot by being easier to use than record linkage[3][4] while still giving really good matches, so I think there's something there that might gain traction.
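In case the mechanics aren't obvious from the description, here's a minimal sketch of the idea as I read it: embed both key columns, score all pairs by cosine similarity, and let the Hungarian algorithm pick a globally optimal one-to-one assignment. (My own toy illustration with a stand-in embedding instead of a real model; not jellyjoin's actual code.)

    import numpy as np
    from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

    def toy_embed(texts, dim=64):
        """Stand-in for a real embedding model: hashed character trigrams."""
        vecs = np.zeros((len(texts), dim))
        for i, t in enumerate(texts):
            t = t.lower()
            for j in range(len(t) - 2):
                vecs[i, hash(t[j:j+3]) % dim] += 1.0
        return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

    left = ["Intl. Business Machines", "Alphabet Inc", "Microsoft Corp"]
    right = ["MICROSOFT", "IBM", "Google (Alphabet)"]

    sim = toy_embed(left) @ toy_embed(right).T   # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)     # maximize total similarity
    for r, c in zip(rows, cols):
        print(left[r], "->", right[c], f"({sim[r, c]:.2f})")

The one-to-one constraint is what makes it a "join" rather than nearest-neighbor search: each right-side row is used at most once.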
I see you saved a spot to show how to use it with an alternative embedding model. It would be nice to be able to use the library without an OpenAI api key. Might even make sense to vendor a basic open source model in your package so it can work out of the box without remote dependencies.
Yes, I'm planning out-of-the-box support for nomic[1] which can run in-process, and ollama which runs as a local server and supports many free embedding models[2].
If you're adding more LLM integration, a cool feature might be sending the results of allow_many="left" off to an LLM completions API that supports structured outputs. E.g. imagine N_left=1e5 and N_right=1e5 but they are different datasets. You could use jellyjoin to identify the top ~5 candidates in right for each left, reducing candidate matches from 1e10 to 5e5. Then you ship the 5e5 off to an LLM for final scoring/matching.
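The blocking half of that is cheap to do; a hypothetical sketch in plain numpy (not jellyjoin's actual allow_many behavior):

    import numpy as np

    def topk_candidates(sim, k=5):
        """From an (N_left, N_right) similarity matrix, keep the k best
        right-side candidates per left row: 5e5 pairs instead of 1e10
        for N_left = N_right = 1e5, before the expensive LLM scoring stage."""
        idx = np.argpartition(-sim, k, axis=1)[:, :k]   # top-k, unordered
        return [(i, int(j)) for i in range(sim.shape[0]) for j in idx[i]]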
Currently working on an open source Heroku / Fly.io / Render alternative: https://canine.sh
It's built on top of Kubernetes, based on learnings from my previous experiences scaling infrastructure.
If you look at the markup PaaS (Heroku, Fly, Render) applies to IaaS (AWS, Hetzner), it's on the order of 5-10x. But not having that, and trying to stitch together random AWS services is a huge PITA for a medium sized engineering team (we've tried).
On top of all that, there's a whole host of benefits to being on Kubernetes, namely that you can install any Helm package with one click, which Canine also manages.
A good example is Sentry -- even though it has an open source offering, almost everyone pays for the cloud version because it's too scary to self-host. With Canine, it's one click, and you get sentry.your-domain.com to use for whatever you need.
Recently got a sponsorship from the Portainer team to allow me to dedicate way more time to this project, so hugely grateful to them for that.
I'd say the biggest difference is in the backend -- canine is built on top of kubernetes, which is what lets it leverage the rich ecosystem of tooling and packages. Kubernetes has a reputation for being difficult to use, and so Canine tries to be super opinionated, and follow a set of best practices.
I'd like to think at this point (about 2 years into development) we've gotten to a place where the end user doesn't even know they are using Kubernetes.
Microlandia, the brutally honest city builder. Posting this for a second time, because I've been working super hard on a Steam release.
Last month's “What are you working on?” thread prompted me to upload this game to itch, and one month later I've got a small community, lots of feedback, and iterations. It brought a whole new life to a project that was on the verge of being abandoned.
I wonder if you simulate at individual level or group? Would be cool at individual level each one making decisions individually and see some emerging behavior.
Also how corruption emerges in gov etc
Also if no job maybe they could try uber/food delivery crappy jobs like that or start their own business.
Maybe also less money, less likely to have kids? Would be nice to show how poverty helps or hinders population growth. If too poor, people might have no education and would make kids; an average citizen who can’t save money will avoid kids. That’s why individual-level simulation could find these emerging patterns. But probably too expensive computationally?
> I wonder if you simulate at individual level or group? Would be cool at individual level each one making decisions individually and see some emerging behavior.
If you are referring to the citizens, yes, at individual level. However for traffic I'm using a sampling rate.
> Also if no job maybe they could try uber/food delivery crappy jobs like that or start their own business.
That's an awesome idea, I added it to my backlog :)
> less money less likely to have kids?
This is mega tricky, because it happens very differently across the world. Yes, it can be computationally expensive; that's why the city is so small (for now), but as I start to distribute the simulation across many cores, players with high-core-count CPUs will be able to choose a bigger city size :) I agree that individual-level simulation is what makes it interesting and I plan to keep it like that.
That's something I haven't thought about, but I 100% agree that it should be possible! Very soon I want to introduce roads with bicycle lanes, which have less car bandwidth; that will make me refactor the traffic simulation, and the idea of a bicycle-only road type will become possible.
In Venice (the Italian one) - cars, mopeds and bikes are all banned. Most trade goods are transported around by a man moving fast with a sack truck shouting 'Attencion!'
> Brutally honest? I hope it shows the huge amount of land needed for parking lots :P
Parking space simulation is coming soon. I feel I would completely miss the point if I left that out. The idea is to have street parking (with configurable profit for the city), parking lots, and buildings with underground parking, which should conflict, of course, with metro lines.
Public transport is the next thing I wanna work on. I will start with buses, which I have an idea of how to implement, but for metro I want to first learn how it can be simulated faithfully.
This weekend I have plans to start playing a lot of Subway Builder (https://www.subwaybuilder.com), which I'm really excited about, and maybe get some books on the subject, in order to get it right.
I'm a dev and also a private pilot. Currently I'm working on Pilot Kit: https://air.club/ , a mobile app born from my own frustration with the amount of tedious paperwork in aviation.
It's an all-in-one toolkit designed to automate the boring stuff so you can focus on flying. Core features include: automatic flight tracking that turns into a digital logbook entry, a full suite of E6B/conversion calculators, customizable checklists, and live weather decoding.
It’s definitely not a ForeFlight killer, but it's a passion project I'm hoping can be useful for other student and private pilots.
I'm currently building my own coding agent, VT Code. VT Code is a Rust-based terminal coding agent with semantic code intelligence via Tree-sitter (parsers for Rust, Python, JavaScript/TypeScript, Go, Java) and ast-grep (structural pattern matching and refactoring).
It supports multiple LLM providers: OpenAI, Anthropic, xAI, DeepSeek, Gemini, OpenRouter, Z.AI, Moonshot AI, all with automatic failover, prompt caching, and token-efficient context management. Configuration occurs entirely through vtcode.toml, sourcing constants from vtcode-core/src/config/constants.rs and model IDs from docs/models.json to ensure reproducibility and avoid hardcoding. [0], [1], [2]
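(As an aside for readers: a minimal sketch of what automatic failover across providers can look like, assuming a common complete() interface per provider client; this is my own generic illustration, not VT Code's actual implementation.)

    class ProviderError(Exception):
        """Raised by a provider client when a request fails."""

    def complete_with_failover(providers, prompt):
        """Try each configured provider in order; fall back on failure."""
        errors = []
        for provider in providers:
            try:
                return provider.complete(prompt)     # assumed common interface
            except ProviderError as exc:
                errors.append((provider.name, exc))  # note it, try the next
        raise RuntimeError(f"all providers failed: {errors}")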
Recently I've added Agent Client Protocol (ACP) integration. VT Code is now a fully compatible ACP agent and works with any ACP client: Zed (first-class support), Neovim, marimo notebook. [3]
Thank you! I'm glad that you find this project useful. VT Code is my current passion: learning how agent coding works and how far I can push myself to build one (AI-assisted). I will keep developing and improving it. Currently I'm planning to run Terminal-Bench to see how VT Code performs.
This looks very exciting! I'm following it and I'll give it a go. Not that I'm unsatisfied with Claude Code for my amateur level, but it's clear incentives are not exactly aligned when using a tool from the token provider xD
I love that you've made it open source and that it's in Rust, thanks a lot for the work!
Thank you for your kind words. This is my own research into how coding agents work in practice; I love exploring the underlying technologies of how Claude Code, Codex, and coding agents in general work.
I chose Rust since I have some familiarity and experience with it. VT Code is, of course, AI-assisted; I mainly use Codex to help me build it. Thank you again for checking it out, have a great day! : )
Kindly let me know about your experience with the Zed integration; I have done the ACP integration and merged against the upstream Agent Client Protocol spec from Zed. The integration experience is quite magical, honestly. It's working in Zed, though for tool calls I'm still improving https://github.com/agentclientprotocol/agent-client-protocol...
Thank you for your very kind words. I love building, and agentic coding is my current curiosity.
> I’m curious though, how significant do you think it is for the agent to have semantic access through Tree-sitter?
For this, I'm really not sure. Since the start of building VT Code, I've had this idea of using Tree-sitter to give the agent more (or faster/more precise) semantic understanding of the code, instead of relying on it to figure things out itself. Naively, I think this could help the agent make better language-specific, more accurate decisions about the workspace (context) it is working in. Without Tree-sitter, I think the agent could eventually figure it out itself; I should research this topic more. In VT Code, I included six languages (Go, Python, Rust, TypeScript, Swift...) via Rust binding crates, so when you launch the vtcode agent on any workspace, it will show the main languages in the workspace right away.
> Also what model have you had the most success with ?
I'm on a limited budget, so I mainly use OpenRouter and its vast model support; that way I can prototype quickly for different use cases. For the VT Code agent, I mainly use x-ai/grok-code-fast-1; in my experience it suits building VT Code itself best because of its speed, versatile function calling, and good instruction following. I've also had good success with x-ai/grok-4-fast. I have not tried claude-4.5-sonnet or gpt-5/gpt-5-codex, though. I would really love to run benchmarks to see how VT Code performs on real-world coding tasks; I'm aiming for the Aider polyglot bench, terminal-bench, and swe-bench-lite, which are in my plan in my GitHub issues.
For VT Code itself, I instruct it to strictly follow the system prompt, in which I take inspiration from Anthropic, OpenAI, and Devin guides/blogs on how to build a coding agent. But for a model-agnostic agent, the capability to support multiple providers and multiple models is a challenge, and for this I think I need help. I'm fortunate to have support from the open-source community, who suggested I use zig; I have had good success with it so far for implementing LLM calls and the /model picker.
Overall, in my experience building VT Code, the most important aspect of an effective coding agent is context engineering, as all the big labs' research shows. A good system prompt is also very important, but context is everything. https://github.com/vinhnx/vtcode/blob/main/prompts/system.md
// Sorry, English is not my main language, so pardon the typos and grammar. Thank you!
- You can precisely tweak every shade/tint so you can incorporate your own brand colors. No AI or auto generation!
- It helps you build palettes that have simple to follow color contrast guarantees by design e.g. all grade 600 colors have 4.5:1 WCAG contrast (for body text) against all grade 50 colors, such as red-600 vs gray-50, or green-600 vs gray-50.
- There are export options for plain CSS, Tailwind, Figma, and Adobe.
- It uses HSLuv for the color picker, which makes it easier to explore accessible color combinations because only the lightness slider impacts the WCAG contrast. A lot of design tools still use HSL, where the WCAG contrast goes everywhere when you change any slider, which makes finding contrasting colors much harder.
- Check out the included example open source palettes and what their hue, saturation and lightness curves look like to get some hints on designing your own palettes.
It's probably more for advanced users right now but I'm hoping to simplify it and add more handholding later.
Really open to any feedback, feature requests, and discussing challenges people have with creating accessible designs. :)
I've sorted the colors by luminance/lightness and added a gray swatch for comparison so you can explore which color pairs pass WCAG contrast checks.
I haven't really gotten into colorblind-safe colors like this yet, where the colors mostly differ by hue and not luminance. Colorblind and non-colorblind people should be able to tell colors apart based on luminance difference, i.e. luminance contrast. Hue perception is impacted by the several different kinds of color blindness, so it's much trickier to find a set of colors that everyone can tell apart. This relates to the WCAG recommendation that you don't rely on hue (contrast) to convey essential information (https://www.w3.org/WAI/WCAG21/Understanding/use-of-color.htm...).
The gray swatch above could be called colorblind safe, for example, because as long as you pick color pairs with enough luminance contrast between them, colorblind and non-colorblind people should be able to tell them apart. You could even vary the hue and saturation of each shade to make it really colorful; as long as you don't change the luminance values, the WCAG contrast between pairings should still pass.
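For anyone who wants to check pairs themselves, the WCAG contrast ratio is straightforward to compute from relative luminance; a small Python version of the WCAG 2.x formula:

    def srgb_to_linear(c8):
        """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb):
        r, g, b = (srgb_to_linear(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(rgb1, rgb2):
        l1, l2 = sorted((relative_luminance(rgb1),
                         relative_luminance(rgb2)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    # 4.5:1 is the WCAG AA threshold for body text:
    print(contrast_ratio((255, 255, 255), (0, 0, 0)))  # -> 21.0 (the maximum)

Note that only luminance enters the formula, which is why a hue- or saturation-only change leaves the contrast ratio untouched.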
I get that you say it is for advanced users, but I think a "how to use this" link with a video in it that explained a few things would probably open it up to a lot more users.
There's so much more to do with tools like this, and I'm really glad to see it.
- Drag the hue and saturation curves to customise the tints/shades of a color. Look at the UI mockup as you do this to make sure the tints/shades look good together.
- The color pairings used in the UI mockup all initially pass WCAG contrast checks but this can break if you tweak the lightness curve of a color. The mockup will show warning outlines if this happens. Click on a warning and it'll tell you which color pairs need to have their lightness values moved further apart to fix it.
- Once you're happy, use the export menu to use your colors in your CSS or Figma designs. You can use the mockup as a guide for which color pairs are accessible for body text, headings, button outlines and so on.
Does that make more sense? You really need to be on desktop as well because the mobile UI is more of a demo.
Thanks for the feedback! Yeah, I appreciate there's a lot of background here around color palette design, UI design, color spaces, and accessibility so I likely need something like a video or tutorial. Another route is to have the tool start in a less freeform mode that handholds you through the process more.
A general utility library in C99 that I reuse throughout my different projects! Atm working on the JSON module and some f32 linear algebra.
https://github.com/romainaugier/libromano
I always learned programming and maths on my own so any advice is welcome!
The goal is to serve the laws in a format that is easy to cite, monitor, or machine-read. It should also have predictable URLs that can be inferred from the law’s name, and side-by-side AI translations (marked as such).
I cite a lot of laws in my content and I want to automatically flag content for review when a specific paragraph of the law changes. I also want to automatically update my tax calculator when the values change.
Basically, a refresh of gesetze-im-internet.de and buzer.de.
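The flag-on-change part can be fairly simple; a sketch under the assumption that each paragraph gets a stable ID (fetch_paragraphs is a hypothetical scraper, not a real API):

    import hashlib

    def fingerprint(paragraphs):
        """Map stable paragraph IDs (e.g. 'EStG §32a') to a hash of their text."""
        return {pid: hashlib.sha256(text.encode()).hexdigest()
                for pid, text in paragraphs.items()}

    def changed_paragraphs(old_fp, new_fp):
        """IDs added, removed, or edited since the previous crawl."""
        return [pid for pid in old_fp.keys() | new_fp.keys()
                if old_fp.get(pid) != new_fp.get(pid)]

    # new_fp = fingerprint(fetch_paragraphs("estg"))  # hypothetical fetcher
    # -> re-review any article or calculator citing a changed paragraph ID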
That's really important, I think. Tangentially, https://ecfr.gov/ is bizarrely one of the most impressive websites I have encountered. It's regulations, not laws, but it has so many ways to discover and link and learn and trace back to law. I'm not a lawyer, but it's been a great resource for understanding and surprisingly pleasant to read. I found learning about the whole law and regulation machinery and provenance via eCFR pretty fascinating.
Dunno if other governments are this Byzantine in practice (our system seems to be like... manual integration of diff patches) but it's pretty interesting and I really appreciate the work that goes into these types of things.
I've been suffering from migraines for the last month, so I have channeled my (non-migraine) time into a migraine tracker to try and find the root causes. The tracking apps I tried all have nice complex forms, which is all well and good, unless... you are having a migraine.
Rough idea is easy to use voice mode to record data, then analyze unstructured data with AI later on.
I want to track all relevant life information, so what I'm eating, meds I'm taking, headache/nausea levels, etc.
Adding records is as easy as pressing record on my apple watch and speaking some kind of information. Uses Deepgram for voice transcription since it's the best transcription API I've found.
Will then send all information through to an LLM for analysis. It has a "chat with your data" page to ask questions and try and draw conclusions.
Main webapp is done, now working on packaging it into an iOS app so I can pull biometrics from Healthkit. Will then look into releasing it, either on github or possibly in the app store. It's admittedly mostly vibe coded, so not sure if it'll be something releasable, but we'll see...
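For anyone building something similar, the core of that pipeline reduces to a small sketch (the record schema is my own invention, and call_llm stands in for whichever completions API you use):

    import json

    FIELDS = ["time", "kind", "detail", "severity"]  # kind: food|med|symptom|sleep

    def extract_entries(transcript, call_llm):
        """Turn one voice-note transcript into structured log records."""
        prompt = ("Extract log entries from this voice note as a JSON list of "
                  f"objects with keys {FIELDS}:\n{transcript}")
        return json.loads(call_llm(prompt))  # assumes the model emits valid JSON

Keeping the raw transcripts around alongside the structured records means you can re-extract later with a better prompt or model.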
As a fellow migraineur, I feel compelled to point out that the quest for triggers and root causes is probably never going to end. The way I see it, the migraine "bucket" slowly fills up, and the final trigger is simply the drop that makes it run over.
I can suggest the research papers by Markus Dahlem for some in depth modern takes on migraine.
It's definitely bucket-like for me, and I can attest meditation empties it. Whenever I stop meditating, mental busyness and subconscious anxiety slowly build up. Half an hour a day is enough to keep it away. I just keep bringing my attention back to the breath, trying to feel into the physiological need to breathe (which is usually occluded or distorted by mental activity). Whenever I feel I am actively holding on to some tension, I allow myself to release it. That's all in terms of instructions, and for me it works wonders. I look at it as the equivalent of flossing for the brain ;)
If it’s an iPhone app, the new on-device transcription API in iOS 26 works well and is very fast. You could also use the on-device LLM to clean up the transcription. Cheaper and more privacy-friendly.
For me, as for a lot of people, lack of sleep is the big one... if I build up 4+ hours of sleep debt over a week, I'm at risk. So anything you can do to make that easier to log, like integration with a sleep tracker, would be good.
Also, a plug for Oliver Sacks's Migraine which taught me a lot about migraine with aura.
My current project also revolves around using voice notes to log life events. I'd love to talk and see if we could exchange some ideas. My Gmail username is the same as my HN username.
Well, depending on the people you meet, and the roles you are in, any kind of social contact can be mentally draining, even if it is not directly obvious.
Note that even the anticipation of meeting people can be a mental load.
I'm building a local medical AI app for Mac, recently published on the App Store. https://apple.co/4mlYANu
It uses MedGemma 4B for analyzing medical images and generating diagnostic insights and reports. Of course, it must be used with caution; it's not for real diagnostics, but it can be a way to get another view.
Currently, it supports chat and report generation, but I'm stuck on what other features to add beyond these. I'm also experimenting with integrating the 27B model; even with 4-bit quantization, it looks better than the 4B.
A reality-bending anomaly game where you are the anomaly: as you interact with items in your environment, you may notice some anomalies. Moving a cookie jar opens the fridge door. Closing the fridge door makes a painting on the wall shrink, and rotating that painting switches the lights on.
The idea is for the game to make logical sense, but make the player sound completely unhinged from reality "I need to put the toaster on top of the oven to make the lamp spin around, that way I can move the lamp across the room near the couch to unlock the next level"
That’s built on a dataset and paper I wrote called CommonForms, where I scraped CommonCrawl for hundreds of thousands of fillable form pages and used that as a training set.
Next step is training and releasing some DETRs, which I think will drive quality even higher. But the ultimate end goal is working on automatic form accessibility.
Continuing to work on a Low Power FM community radio station for the East San Fernando Valley of Los Angeles. We have started promoting and putting on local events and are trying to fundraise to build out the station. Raising money is hard! We did a big show in Burbank where several hundred people showed up, but we only netted $800 after expenses. :(
Since this is Hacker News, I'll add that I'm building the website and archiving system using Haskell and htmx, but what is currently live is a temp static HTML site.
https://github.com/solomon-b/kpbj.fm
This is sick - I happen to run a site for DIY and community organizations like yours. We have proven the best way to fundraise is to throw events like you did, but to upsell people on a recurring donation when they get the ticket.
On the off chance you are throwing another event, I would love to help you raise much more than $800 one time (my site is https://withfriends.events/)
Short answer: I would just recommend one of the tons of tax software packages out there specific to LLCs, individuals, or 501(c)(3)s. My site helps with the raising-money part and just integrates and gives advice for that.
This might be a naive question which you've probably been asked plenty of times before, so I'm sorry if I'm being tedious here.
Is it really worth the effort and expense to have a real radio station these days? Wouldn't an online stream be just as effective if it was promoted well locally?
A few years ago a friend who was very much involved in a local community group which I was also somewhat interested in asked me if I wanted to help build a low power FM station. He asked me because I know something about radio since I was into ham radio etc.
I was skeptical that it was worth the effort. The nerdy part of me would have enjoyed doing it, but I couldn't help thinking that an online stream would probably reach as many people without the hassle and expense of a transmitter, antenna, etc.
I know it's a toss up. Every car has an FM radio. Not everyone is going to have a phone plugged in to Android Auto or Apple Car Play and have a good data plan and have a solid connection.
I also pointed out that the technical effort is probably the small part compared to producing interesting content.
1. Radio is COOL. As a fellow ham I think you would agree with me on this one so I'll leave it at that.
2. Internet streaming gives you a wider but far less localized audience. We will have an internet stream, but being radio-first shifts the focus to local community and local content.
3. Internet streaming and radio have related but not entirely overlapping histories and contexts which impacts how people produce and consume their content. I love the traditional formats of radio and they are often completely missing in online radio which IMO models itself more often on mixtape and club DJ culture.
4. AI slop is ruining the world. I have this belief that as AI slop further conquers the internet we are going to get to a place where nobody trusts internet content. People will seek out novelty and authenticity (sort of how LLMs do lol) and I think there will be a return to local content and community.
5. Commercial radio sucks. The LPFM system is a wonderful opportunity to create a strong, community driven alternative to corporate media.
Radio is so much fun to learn. It’s liberating to learn for curiosity and joy rather than commercialization. The community is welcoming, and while not directly translatable for most paid work, it does teach general problem solving skills.
I’m working on Userdoc, a spec-driven development workspace.
Break down your software requirements (Userdoc guides you through the process), refine/confirm, set up your technical specs, coding/business guidelines & guardrails, and then create development plans (specs) which can be easily consumed by coding agents via MCP, or by platforms like Lovable / v0 using Markdown.
Working on Cursor background agent integration atm.
Here on the Croatian islands, maritime traffic disruptions and power outages happen often. Constantly checking websites or searching paper notices stuck on random street lamp posts is a no-go, and timely information is important.
I'm working on a mini-project which monitors official resources on the web and sends email notifications on time. Currently covering around 15000 inhabitants.
My side project - https://macrosforhumans.com - is a traditional mobile macro tracker with first-class support for voice (and soon image and text blob) inputs for your recipes, ingredients, measurements, units, etc. Kind of a neat project that may never make it too far off the ground considering I am not a mobile dev, but it's been fun to build so far with the help of Claude Code. It's built with Flutter and a FastAPI backend.
In the AI macro food logging world, there's really only Cal AI, which estimates macros based on an image. I use Cronometer personally, and it's super annoying to have to type everything in manually, so it makes sense why folks reach for something like Cal AI. However, the problem with something like Cal AI is accuracy: it's at best a guess based on the image. Macros for Humans tries to be more of a traditional weigh-your-food, log-it kind of app, while updating the main interface for how users input that info into something more friendly.
I set myself a hard deadline to present a live demo at a local showcase/pitch event thing at the end of the month. I bet the procrastination will kick in hard enough to get the backend hosted with a proper database and a bit more UI polish running on my phone. :-)
Here's a really early demo video I recorded a few weeks ago. I had just spoken the recipe on the left, and when I stop recording you can see my backend stream the objects out as they're parsed from the LLM: https://www.youtube.com/watch?v=K4wElkvJR7I
I have been working on it for the last two years as a side project, but starting in March it will be my full-time job! Kind of excited and scared at the same time.
I've become a bit addicted to online education. I finished my first master's degree, in Computer Science, in July, and I started a master's in Mathematics from The Open University at the beginning of October. I've wanted to really get into the weeds of obscure and arguably-useless math for about as long as I can remember, and I figure that getting a master's in it is as good a way to get that knowledge as any.
Other than that, I've been doing a lot of fixing of tech debt in my home network from the last six years. I've admittedly kind of half-assed a lot of the work with my home router and my server and my NAS and I want these things to be done correctly. (In fairness to me, I didn't know what I was doing back when I started, and I'd like to think I know a fair bit better now).
For example, when I first built my server, I didn't know about ZFS datasets, so everything was on the main /tank mount. This works but there are advantages to having different settings for different parts of the RAID and as such I've been dividing stuff into datasets (which has the added advantage of "defragging" because this RAID has grown by several orders of magnitude and as a result some of the initial files were fragmented).
I’m building a small side project called https://www.localgeoguessr.com/ - a fun geography game that tests how well you know your local area. It's still in a very early stage, not polished yet, but it's somewhat playable.
The idea is to eventually add more categories like “restaurants,” “theaters,” “roads,” etc., so you can play based on local themes.
I’d love to hear your thoughts - any feedback on what you’d like to see, what feels off, or any issues you run into would be super helpful.
I thought your idea was really cool so I gave it a go.
I live about 20 minutes from a minor US city.
All but one prompt was in a 3-block radius IN the city (again, about 20 minutes from my town's town hall).
For the one prompt I didn't know, I guessed the same 3-block radius as the others, and it was about 2 miles away. Still in the city, not the town I typed in.
It seems like smaller towns will be gobbled up by famous city elements. Especially here in New England, where 'famous' local things are so few.
edit: also, changing the 'radius' resets the city to where the website THINKS I am instead of where I typed in.
It's really fun! Thank you.
On the result screen, let me click on the locations so that I can learn more about them. There were some museums I didn't know, and I would have clicked immediately to learn more about them. Or even add a little explanation of what they are.
Love this. I chose a small village and got a few clues like "War memorial"; I'd bet there are many war memorials in the 10 km radius, so it was impossible to know which one it meant.
I'm working on Penteglot - a fork of Emacs's Eglot LSP client with multi-server support.
The main feature: you can run multiple language servers simultaneously for the same buffer.
One of the main reasons people stick with lsp-mode over Eglot has been the lack of multi-server support. Eglot is otherwise the most "emacsy" LSP client, so I'm working on filling that gap and I hope it could be merged into Emacs one day.
This is still WIP but I've been using it for a while for Python (basedpyright or pyrefly + ruff for linting) and TypeScript (ts-ls + eslint + tailwind language server).
I'm working on an open-source tool to create photo galleries from a folder of photos: https://simple.photo. It creates galleries as static sites that are easy to self-host.
I started this out of frustration that there is no good tool I could use to share photos from my travels and of my kids with friends and family. I wanted to have a beautiful web gallery that works on all devices, where I can add rich descriptions, and that I could share with a simple link.
Turned out more people wanted this (got 200+ GitHub stars for the V1), so I recently released the V2 and I'm working on it with another dev. Down the road we plan a SaaS offering for people who don't want to fiddle with the CLI and self-host the gallery.
I like the layout tiles you have for the photo thumbnails. I will dig through and learn some CSS. I have struggled with different-sized content when creating a compact masonry layout.
The CSS for this is indeed tricky. I figured out this layout 5 years ago in the v1 and have forgotten how it works; I just took it over as it looks good. The key is that not all rows are exactly the same height. There are small differences that allow photos to fit horizontally.
I also tried the vertical masonry layout, which looks good, but makes no sense if your photos have a chronological order...
Working on a dedicated offline space. Screen devices are stored in a locker. It serves coffee, other beverages, and light food. (There will be a small separate space for occasional screen/internet access in case of need.)
I've been working on the idea for about a year now. I have put up the funds and set up the corporation. Been busy designing the menu, scouting an ideal location and finding the right front-end staff.
This sounds like my retirement plan: coffee shop + book store. I tend to buy old engineering and mathematics textbooks (my collection has suffered losses through multiple moves, unfortunately) and I find that these are typically overlooked at normal bookstores and even libraries.
I'm making a game that's inspired by the niche but adored "The Last of Us Factions", the multiplayer as part of the first Last of Us (only available on Playstation). I got a gaming PC a couple years ago and haven't been able to find anything quite like it.
Making it with Bevy, the Rust game engine, and really enjoying it so far. Using Blender for making assets. I'm maybe a dumbass for making it as my first game, but I just don't really get excited by smaller projects.
Overall I've found modern games to be (1) overstimulating and (2) full of background algorithms to keep me engaged that I don't trust (see: the free-to-play model).
An open source website I built to explain tensor functions in PyTorch: https://whytorch.org
It makes tricky functions like torch.gather and torch.scatter more intuitive by showing element-level relationships between inputs and outputs.
For any function, you can click elements in the result to see where they came from, or elements in the inputs to see exactly how they contribute to the result. I found that visually tracing tensor operations clarifies indexing, slicing, and broadcasting in ways that reading the docs can't.
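To make the provenance idea concrete, here's the plain-PyTorch behavior that the site visualizes for torch.gather (standard PyTorch, nothing WhyTorch-specific):

  import torch

  x = torch.tensor([[10, 20, 30],
                    [40, 50, 60]])
  idx = torch.tensor([[2, 0],
                      [1, 1]])

  # For dim=1: out[i][j] = x[i][idx[i][j]]
  out = torch.gather(x, dim=1, index=idx)
  print(out)  # tensor([[30, 10],
              #         [50, 50]])
  # e.g. out[0][0] == 30 because it came from x[0][idx[0][0]] == x[0][2]

Clicking out[0][0] on the site shows you that x[0][2] is its source, which is the relationship the index formula encodes.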
You can also jump straight to WhyTorch from the PyTorch docs pages by modifying the base URL directly.
I launched a week or two back and now have the top post of all time on r/pytorch, which has been pretty fun.
This is really nice. For `torch.mul(x, y)`, it would be nice if it highlighted the entire row or column in the other matrix and the result. Right now it shows only a single multiplication, which gives a misleading impression of how matrix multiply works. I wouldn't mention it, except that matrix multiplication is so important that it's worth showcasing. I've bookmarked the site and will share it at a PyTorch training session I'm leading in a couple of weeks.
I'm working on a little website that helps me and my friends decide more easily what to play on a game night, because it always goes like this:
- I want to play x and y
- I want to play y and z
- I don't have z
- I don't really feel x
- Lets play b
- I'd rather play c
- Let's settle on d
- Today H is joining, he does not have d
It'll work in sessions: first everyone suggests games, then in a second phase everyone can veto suggestions, then everyone votes and it displays the games with the most votes. You can also manage/import a list of your games, and it'll show who owns what. It's geared towards video games, but will work for board games too. Hope to release it for everyone in the next few weeks.
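A minimal sketch of that suggest, veto, then vote flow (the names and data model here are made up, not the actual app's):

  suggestions = ["x", "y", "z", "b", "c", "d"]
  owned = {"Alice": {"x", "y", "b", "c", "d"},
           "Bob":   {"y", "z", "b", "c", "d"},
           "H":     {"x", "y", "z", "b", "c"}}
  vetoes = {"x", "z"}
  votes = {"y": 2, "b": 1, "c": 2, "d": 1}

  # A game survives if nobody vetoed it and everyone owns it.
  playable = [g for g in suggestions
              if g not in vetoes and all(g in lib for lib in owned.values())]
  winner = max(playable, key=lambda g: votes.get(g, 0))
  print(winner)  # "y" ("d" is out since H doesn't own it; ties go to suggestion order)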
Sounds great. Would everyone "accept" the result, or would it be worth adding a little LLM explanation of why the result should satisfy everyone, by explaining how the winning game retains elements of this other voted game and that one, to try and make people go "ok, sure"?
I'm not a huge gamer, so maybe this is an obvious reaction they would get from their own experience when seeing the result, without needing an LLM explanation.
Maybe far down the road. For now I'm fine with a minimal tool that does not aim to take the human interaction out of the group, haha. If the tool finds a small community, more features might be added, like content-based recommendations, but I don't want to drive up the costs right now.
I'm working on a DSL and browser-based playground for procedural 3D geometry called Geotoy: https://3d.ameo.design/geotoy
It's largely finished and functional, and I'm now focused on polish and adding additional builtin functions to expand its capabilities. I've been integrating different geometry libraries and kernels as well as writing some of my own.
I've been stress-testing it by building out different scenes from movies or little pieces of buildings on Google Maps street view - finding the sharp edges and missing pieces in the tool.
My hope is for Geotoy to be a relatively easy-to-learn tool and I've invested significantly in good docs, tutorials, and other resources. Now my goal is to ensure it's something worth using for other people.
I'm building a platform to help employees prepare for large high-value meetings and presentations with note taking and on-demand AI summarization and sentiment analysis to best prepare and deliver information. The main driver was first-hand experience with expensive team meetings that use time inefficiently and result in "circling back" or "taking it offline" to actually make decisions, which results in information silos and even more inefficient use of time.
The platform also supports HR for the organization by presenting in-depth anonymized data surrounding team interactions, exceptional individuals, and potential bottlenecks within the organization caused by qualitative issues. Aiming to launch by end of year and working with small businesses as free test users for feedback and validation.
Currently working on an open-source agent for privileged access management (PAM) and just-in-time (JIT) access to cloud infrastructure, SaaS applications, and local systems. It's using serverless workflows (https://serverlessworkflow.io/) and https://www.temporal.io to guarantee robust, deterministic workflow execution. Temporal is used to orchestrate elevations across environments and systems. It tasks “agents” to grant access where it needs to be, rather than centralising permission stores. It guarantees execution and revocation of permissions. Run it locally for sudo or UAC, or in the cloud for IAM or for individual applications. Check it out: https://github.com/thand-io/agent
Curious why you chose Temporal, which requires your users to either run an external coordination server or pay Temporal money for theirs? Did you look at DBOS (which doesn't require an external server and can just use your existing database)?
I think this is amazing. I work in the construction sector and there are so so so many small one-man tradesperson companies that need to know about this.
Nice! I recently built an invoice generator (not open sourced) for my own needs. I built mine because I needed something when I discontinued a SaaS that had provided it. Mine is written in C# and uses a JSON file to define the contents of the invoice. It's run from the command-line and just produces the PDF.
Are you planning to turn this into a full-fledged CRM of some sort? Are you planning to add user login with templates/company fields auto-populated at one point? Looks very clean, congrats.
Why would you do something like this instead of using a cheap script from a codecanyon-type website (a true CRUD crm) where you can collect customer data and provide complete service in the long run? Just saying this because you said you built it for your own use.
I actually hadn’t heard of Codecanyon before! I used to use a paid invoicing service, but these days I just need a simple way to generate invoice PDFs - that’s really all I need.
You can use invoice generators that give you complete control over your customer data. Most scripts are PHP, and if you want something very detailed I'd go with Perfex. Codecanyon is the biggest code marketplace on the internet, owned by Envato.
I wonder if you could just send invoices to Comcast for price increases to their Payable Accounts department and if they'd just pay them. Or just invoice companies for "inconvenience fees" of sorts when they actually create inconveniences.
I wanted to build my own speech-to-text transcription program [1] for Discord, similar to how Zoom or Google Hangouts works. I built it so that I can record my group's D&D sessions and build applications/tools for VTTs (virtual tabletop gaming).
It can process a set of 3-hour audio files in ~20 mins.
Not sure what the market is for something like this but it's something I've been thinking a lot about since stepping down as CEO of my previous company.
My goal is two-fold:
1. Help teams make better, faster decisions with all context populating a source-of-truth.
2. Help leaders stay eyes-on, and circumstantially hands-on, without slowing everything down. What I'd hope to be an effective version of "Founder Mode".
If anybody wants to play around with it, here's a link to my staging environment:
Great idea! Great website! Terrible video. The 90-second format is great; that's how much time I would like to spend learning what exactly your product does. But the whole video is just clicking through some user interfaces with no result. After watching the video, I have even less idea of what the product is for. I would love to see a video that goes through the "next, next, next" in the wizard and then shows the actual outcome.
Great feedback, I'll work on the video ASAP. I intended to immediately create a follow-up video that steps through each component of a newly created decision, got distracted, never circled back.
OK, it seems you are on the path to another 8-figure exit. Good on you. It seems like a great project and could possibly save so much time if well executed and well integrated.
I've added it to SaaSHub saashub.com/orgtools. If you have an @orgtools.com email you can verify and improve the profile. Cheers!
This is a good nudge to choose the grammatically correct option, thank you.
I originally had "less meetings" before an LLM corrected me into using "fewer meetings". Then when talking about Orgtools to a couple people I heard them say "less meetings" and switched back thinking that sounds slightly more natural (but incorrect).
I am working on an AI-powered fitness and food tracker that automatically logs your food based on a photo. One of the difficulties I had when going to the gym was keeping up with and sharing macros weekly with my personal trainer. Manually logging food is a hassle and a massive pain point - so my app, Eat n Snap, attempts to solve this problem. You can also set weight and BMI goals and see your progress on a weekly basis.
I suppose you must know this already if you have done research on alternatives, but there are already a plethora of apps like this — Lifesum, Cal AI, MacroFactor, just to name a few.
I think this is hard with only a photo because you can’t always see what’s inside. But I’ve always dreamed of something like this paired with some kind of affordable hardware scanner that can get just enough data to fill in the blanks from the photos.
This is already a feature in an app called MacroFactor. But there is definitely room for improvement in the field.
One thing that I miss in MacroFactor is some memory of my previous choices.
Example:
If I take a picture of a glass of milk, it always assumes it to be whole milk (3.5% fat). Then I change it to a low fat milk (0.5% fat). But no matter how many times I do that, it keeps assuming that the milk in the photo is whole milk.
Super silly but I'm searching for a mathematical backdoor in Bitcoin's secp and the secr curve. I saw that both curves use unsafe primes (p-1 factors pretty well) for the generator order.
So I'm trying to define a multiplication operation using primitive roots.
I am working on a microkernel for Arm Cortex-M33 microcontrollers, targeting the RP2350 first.
It's going to feature a synchronous IPC model where the inter-task "call graph" is known at compile time, with function-call semantics to pass data between tasks: call(), receive(), reply().
A build tool that reads TOML will generate the kernel calls so that tasks can be totally isolated — all calls go through a supervisor trap, so we have true memory isolation.
Preemption is possible, but control is yielded only at IPC boundaries, so it's not hard realtime.
That makes behavior super robust and auditable at compile time. Total isolation means tasks can crash catastrophically without affecting the rest of the system. The big downsides are a huge increase in flash usage, a constrained programming model, a complex build system, and task-switching overhead. Just a very different model than what I'm used to at $dayjob.
I want to basically find out, hey what happens when we go full safety!? What’s hard about it? What tradeoffs do we need to make? And also kinda like what’s a different model for multitasking. Written in Rust of course.
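For anyone unfamiliar with the call()/receive()/reply() rendezvous, here's a toy model of the semantics in Python (just an illustration of synchronous IPC; the real kernel is Rust on bare metal with none of this machinery):

  import queue, threading

  class Task:
      def __init__(self):
          self.inbox = queue.Queue()

      def call(self, server, msg):
          reply_slot = queue.Queue(maxsize=1)
          server.inbox.put((msg, reply_slot))
          return reply_slot.get()     # caller blocks until reply()

      def receive(self):
          return self.inbox.get()     # server blocks until a call arrives

      def reply(self, reply_slot, result):
          reply_slot.put(result)      # unblocks exactly one caller

  client, server = Task(), Task()

  def server_loop():
      msg, slot = server.receive()
      server.reply(slot, msg.upper())

  threading.Thread(target=server_loop, daemon=True).start()
  print(client.call(server, "ping"))  # -> "PING"

The point being: the caller is parked until the server explicitly replies, which is what makes the inter-task call graph statically analyzable.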
I'm trying to fix an irritating problem by syncing my work calendar with a personal one, which would allow me to see all my events in my preferred calendar app. This project is still in the very early stages.
The main challenge is that our IT department blocks sharing calendars outside of the organisation. While this is primarily a solution for my own problem and likely not valuable to others, you could probably achieve the same result with tools like n8n or IFTTT.
I'm building FlightWise (https://flightwise.io), an all-in-one SaaS platform for flight school operations.
After acquiring a flight school, I quickly realized how challenging the day-to-day operations were. To solve the problems of aircraft fleet management, scheduling, and student course progress tracking, I developed a comprehensive platform that handles all aspects of running a flight school. Existing software is often outdated and expensive, offering poor value. FlightWise was built from the real-world experience of running my own school, where it has delivered immediate and invaluable benefits to our entire team, from students to administrative staff. We've just recently started to offer the platform publicly to other flight schools.
Thank you! We started development around August 2024, and 2 months later we had it in use at our school in a very early state; over time we added more huge features, such as bookings. About 4 months ago we fully moved away from the antiquated platform we had been paying for, as FlightWise had reached a complete feature set.
Currently my biggest focus is the MUD server I'm working on. It allows a developer to create a simple MUD game (locations, items, combat), but all NPCs are actually just LLM-controlled MUD clients.
It uses Server-Sent Events for the client plus HTTP POST for sending actions. Not a traditional direct-TELNET-style MUD server, but it works well in the modern world.
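The transport pattern is roughly this (a minimal sketch using Flask as a stand-in; the author's actual stack isn't specified):

  import queue
  from flask import Flask, request, Response

  app = Flask(__name__)
  events = queue.Queue()  # one global "room", for illustration only

  @app.route("/events")
  def stream():
      def gen():
          while True:
              yield f"data: {events.get()}\n\n"  # SSE wire format
      return Response(gen(), mimetype="text/event-stream")

  @app.route("/action", methods=["POST"])
  def action():
      cmd = request.get_json()  # e.g. {"verb": "go", "arg": "north"}
      events.put(f"you {cmd['verb']} {cmd['arg']}")
      return {"ok": True}

The browser (or an LLM-driven NPC client) holds the /events stream open for world updates and POSTs its actions to /action.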
Definitely not 100% hand-coded, probably only around 30% at this point, as my original code has been refactored and expanded many times by now. It's taught me a lot about managing the agent in agentic coding.
I'm working on a TUI for viewing OpenTelemetry traces locally to help me debug distributed applications that use OTEL. It's currently in its infancy, but I'm already able to get a little use out of it. https://github.com/FredrikAugust/otelly
Redesigning investment holdings for wider screens and leaning on Hotwire Turbo Frames. Thankful for once-campfire as a reference for how to structure the backend. The lazy-loading attribute works great with CSS media queries to display more on larger viewports.
Enjoying learning modern CSS in general. The app uses Tailwind, but I did experiment with plain CSS on the homepage. Letting the design emerge organically from daily use: prototype with Tailwind, then slim it back down with plain CSS.
An interesting challenge was designing for minimal distractions while keeping setup simple for parents. Timer-locked navigation means kids can see what's next but can't start other tasks or switch profiles. I also refactored from schedule-centric (a nightmare to maintain) to task definitions as first-class citizens, which made creating schedules way easier.
React Native/Expo + Firebase. On the App Store after months of dogfooding with the family.
My on again, off again life's work has been a foss dev stack for interactive tutoring systems. Something like a general purpose Math Academy, with mechanics to permit UGC and courses that are both adaptive (to the user's background and demonstrated skill) and inter-adaptive (to the userbase's expressed priorities).
I am using this stack now to build an early literacy app targeting kids aged 3-5ish at https://letterspractice.com (also pre-release state, although the email waitlist works I think!). LLM assisted edtech has a lot of promise, but I'm pretty confident I can get the unit cost for teaching someone to read down to 5 USD or less.
The docs seem to be highly targeted towards software engineers who want to build the system. There is scant information on how teachers would find this useful.
It's like inventing the refrigerator and all the brochure talk about is the internal engineering of the machine, rather than how keeping food cold is useful from the economic and culinary perspectives.
This is a fair point, but I'll emphasize that this is developer documentation and there isn't really an existing product or service targeting consumers, teachers or institutions.
My focus on that front is the LettersPractice app. I taught my own kids (6, 4) to read using early versions of the same software, and I'm pretty confident about the efficacy of the approach.
As for the broader project moving toward consumer-facing applications, there are a few options.
The existing platform-ui is a skeleton / concept sketch of one category: a web platform that allows users to create and subscribe to different courses, where study sessions aggregate content from all subscribed courses. Reddit for knowing stuff and having skills, rather than .
Another broad category is a no-code ITSaaS (interactive tutoring system as a service?) platform. E.g., a specialized bolt.new for EdTech that uses agentic workflows to create courses covering a given domain or specific input documents (e.g., textbooks, curriculum documents).
Whoops, sorry about that. I updated that submission right after writing this post, and re-published the front-end without having committed the changes.
Over the last few months I've been making a songbook with ChordPro (https://www.chordpro.org/), an amazing CLI program that produces a PDF from text files.
I've been working on my own arrangements, putting chords in lyrics, and the program produces a page with the chord diagrams next to each song. ChordPro descends from a long lineage of programs that do this, but it's been actively developed over the last 3-4 years. The developer is quite nice and responds to bug reports.
Most recipes fail for beginners on the first try. I aim to make recipes bulletproof so anyone can pick up any recipe and it will just work.
The goal is to make the best recipe app ever. On a technical level, recipes are built as graphs and assembled on demand. This makes multi-language support easy, any recipe can use any unit imaginable, blind people could have custom recipe settings for their needs, search becomes OP, and there is also a Wikipedia-like database with information that links to all recipes. Because of the graphs, nutritional information, environmental impact, cost, etc. can simply be calculated accurately by following linked graphs.

Most recipe apps are very targeted at specific geographical regions and languages; this graph system removes a lot of barriers between countries and will also be a blessing to expats. Imagine an American in Europe who wishes to use imperial units and English recipes, but with ingredients native to their new homeland. No problem: just follow a different set of nodes and the recipe is created that way for them.
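To illustrate the "follow a different set of nodes" idea, here's a toy sketch (an entirely hypothetical data model, not the project's actual schema): the same ingredient node renders differently depending on which language and unit edges you follow.

  NODES = {
      "ing:flour": {"name": {"en": "flour", "de": "Mehl"},
                    "amount_g": 500},
  }
  GRAMS_PER = {"g": 1.0, "oz": 28.3495, "cup_flour": 120.0}

  def render(node_id, lang="en", unit="g"):
      node = NODES[node_id]
      qty = node["amount_g"] / GRAMS_PER[unit]
      return f"{qty:.1f} {unit} {node['name'][lang]}"

  print(render("ing:flour", lang="en", unit="oz"))        # 17.6 oz flour
  print(render("ing:flour", lang="de", unit="cup_flour")) # 4.2 cup_flour Mehl

Because amounts are stored once in a canonical unit, nutrition or cost roll-ups can be computed by walking the same graph.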
The website is slightly outdated but gives a good idea of what is coming. Current goal is to do beta launch in 2026.
I admire the dedication and love the idea / how much you've thought it through from the app/logic side.
From the marketing side...
I'd make a selection on the website on first visit
- I'm a chef / creator
- I like to cook
Your CTA (call to action) is... not very effective.
Instagram only has 7 followers and no posts.
...
I like the dedication, but I'd definitely recommend improving your marketing/promotion skills ("if you build it they will come" is a myth, unfortunately...). If you wanna have a call about it, feel free to hit me up: tijlatduckdotcom. I'm also in Europe, so timing is easy.
I'm playing around with sandboxing techniques on Mac so I can isolate AI tools and prevent them from interacting with files they shouldn't have access to -- like all my dotfiles, AWS credentials, and such.
Along the way I rolled my own git-multi-hook solution (https://github.com/webcoyote/git-multi-hook) to use git hooks for shellcheck-ing, ending files with blank lines, and avoid committing things that shouldn't be in source control.
Yes, I've used Docker and Podman. They're great. But I wanted to be able to run Xcode and the iOS simulator, which requires macOS, so I developed these solutions.
I'm working on a HubSpot marketplace app that will detect duplicate tasks and workflows and avoid creating them. Does anyone who uses HubSpot want to help me?
I've been vanlifing for a few months now. I tend to have long hours on the road where my mind wanders and I want to write code hands-free.
So, I built it.
Using ChatGPT's voice agents to generate Github issues tagging @claude to trigger Claude Code's Github Action, I created https://voicescri.pt that allows me to have discussions with the voice agent, having it create issues, pull requests, and logical diffs of the code generated all via voice, hands free, with my phone in my pocket.
Your van is probably better than mine, but when I was vanlifing with my wife, I really regretted spending so many long hours on the road. If I did it over again, I'd try to limit driving to a max of two hours per day and five hours per week. We spent far too much money on repairs and not nearly enough time writing code or exploring the places I drove through. Or past.
Are you reviewing code by voice, like a blind programmer? Have you tried Emacspeak? I know that's not normally hands-free.
I'm making an OpenAI API proxy to stay within a spending limit (like $1 per hour, then a 429). AFAIK they only support "budget alerts", and I'm not comfortable releasing anything without a hard limit on the spend. https://github.com/goverture/goxy - Still a work in progress; I plan to support streaming as well, and might support other providers if there's demand.
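The core of the hard-limit idea is a rolling-window spend tracker; here's a sketch in Python for illustration (goxy itself is a separate implementation, and real per-request costs would come from token counts):

  import time

  class SpendLimiter:
      def __init__(self, limit_usd=1.00):
          self.limit = limit_usd
          self.events = []  # (timestamp, cost) pairs

      def try_spend(self, cost_usd):
          now = time.time()
          # Drop spend events older than one hour.
          self.events = [(t, c) for t, c in self.events if now - t < 3600]
          if sum(c for _, c in self.events) + cost_usd > self.limit:
              return 429  # over budget: reject before forwarding upstream
          self.events.append((now, cost_usd))
          return 200  # within budget: forward the request to the provider

  limiter = SpendLimiter(limit_usd=1.00)
  print(limiter.try_spend(0.40))  # 200
  print(limiter.try_spend(0.40))  # 200
  print(limiter.try_spend(0.40))  # 429 (would exceed $1/hour)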
OpenAI gives an allotment of free daily tokens if you agree to hand over the inputs as training. I’d love a proxy that places the limit just before you exhaust those free tokens, to avoid incurring any expenses for small hobbyist projects.
This would be a game changer. There are so many times I run out of credits without knowing where they're going! I know there is the dashboard, but I think it's quite limited for my use cases.
Building https://check.supply: the easiest way to mail a real paper check from your iPhone. Link your bank, type the amount, and we print + mail it for you — with optional certified or express USPS tracking
I'm currently building an order queueing and sales recording web app for small coffee shops: SellerMate [https://sellermate.neilvan.com]
Made primarily for my friend's coffee shop. Data is stored locally, and the app is fully functional when offline. There is an optional "syncing" feature to sync your data with multiple devices which requires a sign up. This is a Progressive Web App built with Web Components. The syncing is made possible with PouchDB/CouchDB.
I still have to write (or screen record) a Getting Started guide but the app is ready for use nonetheless.
https://fooqux.com/ - an experimental tech article aggregator.
For several years now, I've had a routine of collecting articles on topics that interest me throughout the week and then reading them over the weekend. To help organize and streamline this process, I created this website.
The main idea is to gather tech articles in one place and process them with a LLM — categorize them, generate summaries, and try experimental features like annotations, questions, etc.
I hope this service might be useful to others as well. You can sign up with a GitHub account to submit your own articles.
It's working well, and I think I can use the same "backend" to pull this data into a spreadsheet, which could be useful for data-hungry users/coaches/club and event organizers/etc.
Building a tool that automatically generates living infrastructure diagrams from your IaC files and turns them into real-time incident dashboards. Think Figma meets Datadog - beautiful visualization that updates during outages to show you exactly what's failing and how to fix it.
The insight: your architecture diagram shouldn't be a stale PNG in Confluence. It should be your war room during incidents.
Going to be available as both web app and native desktop.
Working hard on Rad, which aims to be a Bash replacement for writing CLI scripts. The goal is to allow users to write maintainable scripts with declarative argument parsing, built-in JSON processing, HTTP requests, and interactive prompts - all in a familiar, readable syntax (Python-like!). Here's an example of the declarative approach to script args:
args:
username str # Required string
password str? # Optional string
token str? # Optional auth token
age int # Required integer
status str # Required string
username requires password // If username is provided, password must also be provided
token excludes password // Token and password cannot be used together
age range [18, 99] // Inclusive range from 18 to 99
status enum ["active", "inactive", "pending"]
Rad does all the arg parsing for you (unlike Bash), including validation for those constraints you wrote, and you can get on with writing the rest of your script in a nice, friendly syntax!
Very keen for feedback so if any of that sounds interesting, feel free to give it a go!
Building a hexapod, my first robot project ever; harder than I thought. I thought that kinematics would be the hardest thing… currently struggling with Euler angles, Bézier curves, and all those wonderful things.
It currently supports complex heatmaps based on travel time (e.g. close to work + close to friends + far from police precincts), and has a browser extension to display your heatmap over popular listing sites like Zillow.
I'm thinking of making it into an API to allow websites to integrate with it directly.
Absolutely stellar! I've been looking for something like this for ages. Any chance you'll add some pre-defined options like grocery stores, libraries, airports, etc.?
Living in Hong Kong for a few months, and absolutely love exploring the different neighborhoods. I'd love something like this, or Walkscore, but with local guides contributing.
Taking a break from tech to work on a luxury fashion brand with my mum. She hand-paints all the designs. The first collection is a set of silk scarves, and we're moving into skirts and jackets soon.
Been a wonderful journey to connect with my mum in this way. And also to make something physical that I can actually touch. Tech seems so…ephemeral at times
Wow, I balked at the price initially, but it actually seems cheap after learning they are hand painted, that's amazing. I can easily imagine there are people willing to pay a lot more for these.
Some earnest and unsolicited feedback on the website: the scroll-based transition is not really working well, looks very jumpy in Safari/MacOS, maybe interpolating between states will help smooth it out. Design-wise, the blur effect is quite jarring, and the product list screams Shopify store and not luxury brand. You already have pretty good photography, I'd feature the portraits heavily instead of the flat product shot. Invest in great typography.
this is super cool. congrats and best of luck with it! Love the mother & son backstory to the product. The scarves look like they could make a great gift as well. I'll bookmark your website.
It's an API that allows zero-knowledge proofs to be generated in a streaming fashion, meaning ZKPs that use way less RAM than normal.
The goal is to let people create ZKPs of any size on any device. ZKPs are very cool but have struggled to gain adoption due to the memory requirements. You usually need to pay for specialized hardware or massive server costs. Hoping to help fix the problem for devs
Fwiw: the website is brand new and very much in the "hot garbage" phase of development. I'm not a front-end guy, so critique is welcome from all - especially any bugs in the UX. I'm still actively uncovering them
It's meant to be a "Rails-like" experience in Go without too much magic or convention.
Basically, speeding up development of fullstack apps in Go using templ, datastar, sqlc with an MVC architecture and some basic generators to quickly setup models, views and controllers.
I’m building SPARK (Signal Processing Algorithms, Routines, and Kernels), an open-source library of modular, efficient DSP components for low-power embedded audio systems.
The goal is to make it straightforward to design and deploy small, composable audio graphs that fit on MCUs and similar hardware. The project is in its infancy, so there’s plenty of room for experimentation and contributions.
Been working on MAKID as a solo side project the last few years. It’s an Ableton Live project manager that seamlessly integrates with your file system. http://makidapp.com/
It is a tool that lets you create whiteboard explainers.
You can prompt it with an idea or upload a document, and it will create a video with illustrations and voiceover. All the design and animation is done using AI APIs; you don't need any design skills.
Here is a video explainer of the popular "Attention is all you need" paper.
I really like the idea! One issue, though, is that the content seems to "stream" much slower than what's being spoken. The result is that I'm sitting there waiting to see what's going to come, even though it's already been said, which makes it hard to focus on whatever new information is coming.
The animations / drawings themselves are solid too. I think there's more to play with wrt the dimensions and space of the background. It would be nice to see it zoom in and out, for example.
Working on improving the data pipeline for https://iplocate.io - an IP intelligence service I've worked on since 2017.
Recent focus has been on geolocation accuracy, and in particular being able to share more data about why we say a resource is in a certain place.
Lots of folks seem to be interested in this data, and there's very little out there. Most other industry players don't talk about their methodology, and those that do aren't overly honest about how X or Y strategy actually leads to a given prediction, the realistic scale of a given strategy's inaccuracies, and so on. So this is an area I'm very interested in at the moment and one I'm confident we can do better in. And it's overall a fascinating data challenge!
It's a long-running process. The HW is mostly defined (but not laid out), though on pause while I work on porting TockOS to an ATSAMV71, to make sure I won't run into any project-ending issues with the SW before I build the hardware.
I'm deploying a biological hardware solution to a regressed masonry event currently blocking ingress to a public channel.
The stoneware bitrot was legacy but eventually overwhelmed the architecture during an off-peak environment incident.
I'm tasked with fulfilling runtime dependencies to restore the wall framework, but had issues with build time mixing parameters not compiling well with the piecemeal building blocks.
I finally got it up and running through trial and error, though I sense a full rewrite will eventually be needed in the future.
I'm building a CLI that automatically generates and runs negative and boundary tests from OpenAPI specs: https://github.com/dochia-dev/dochia-cli. It aims to reduce the effort engineers spend on this type of testing, whether automated or manual, while also making sure it comprehensively covers test scenarios that might not occur to everyone.
I work with DSPy in Python and felt it was missing in the Ruby ecosystem.
So I started https://github.com/vicentereig/dspy.rb: a composable, type-safe version built for Rubyists who want to design and optimize prompts, and reuse LLM pipelines without leaving their language of choice. Working with DSPy::Signatures reminds me a bit of designing a db schema with an ORM.
It’s still early, but it already lets you define structured modules, instrument them in Langfuse, wire them up like functional components, and experiment with signature optimization. All in plain Ruby.
I'm still rebuilding OnlineOrNot's frontend to be powered by the public REST API. Uptime checks are now fully powered by a public API (still have heartbeat checks, maintenance windows, and status pages to go).
Doing this both as a means of dogfooding, and adding features to the REST API that I easily dumped into the private GraphQL API without thinking too hard. That, and after I finish the first milestone (uptime checks + heartbeat/cron job monitors), I'll be able to start building a proper terraform provider, and audit logs.
Basically, at the start of the year I realised GraphQL had taken me as far as it could, and I should've gone with REST to start with.
The demo with an inline 8 at 16000 RPM is hard to judge, because I've never heard such an engine IRL. Might I suggest adding demos of engines people know the sound of?
I'm calling it a "Micro Functions as a Service" platform.
What it really is, is hosted Lua scripts that run in response to incoming HTTP requests to static URLs.
It's basically my version of the old https://webscript.io/ (that site is mostly the same as it was as long as you ignore the added SEO spam on the homepage). I used to subscribe to webscript and I'd been constantly missing it since it went away years ago, so I made my own.
I mostly just made this for myself, but since I'd put so much effort into it, I figure I'm going to try to put it out there and see if anyone wants to pay me to use it. Turns out there's a _lot_ of work that goes into abuse prevention when you're running code from literally anyone on the internet, so it's not ready to actually take signups yet. But there is a demo on the homepage.
I'm working on Botnet of Ares, a hacking simulator game for PC [0]. It's an homage to classics such as Uplink and Hacknet, and also a commentary on the state of the IoT security industry.
Recently I've managed to port the game onto a real-world cyberdeck, the uConsole. [1]
- A front-end library that generates 10kb single-html-file artifacts using a Reagent-like API and a ClojureScript-like language. https://github.com/chr15m/eucalypt
- Beat Maker, an online drum machine. I'm adding sample uploads now, with a content-addressable storage API on the server. https://dopeloop.ai/beat-maker
- Tinkering with Nostr as a decentralized backend for simple web apps.
In short, an explorable database of movies, TV shows, books and board games organised around the time and place that they're set. So if you're interested in stuff set during the French Revolution but not in Paris, you could find it there, for instance.
My team and I are building tools to streamline business processes and help businesses work faster, smarter, and more securely.
Currently we have two tools that are already being used by different companies.
The first is the Flowmono E-Sign tool: you can sign and send documents securely from anywhere, without printing or scanning, and it is cheaper than other e-sign platforms.
And with Flowmono Workflow Automate, you can connect your tools and set up smart workflows that handle repetitive tasks for you, saving time and keeping your processes running smoothly.
Working on Maudit, a Rust library for making static websites. Emphasis on library instead of framework: the aim is that you can integrate Maudit into existing Rust apps, building pages individually, rendering Markdown where you need it, etc., instead of a black-box magic "build website" command.
I am working on an English learning app.
I combine flashcards like Anki with Duolingo-style motivation: leagues and streaks. Plus, integrated Giphy and a Telegram bot. Of course, we use the OpenAI API.
Also, as an idea for selecting words to learn, we parse movie subtitles with AI and find cool phrases and words to learn before watching a movie in English (as a second language). The app has Russian translations and UI.
My goal is to create the best online dictionary for English learners. We use a crowdsourcing approach where anyone can suggest a cool illustration for any word or phrase and add any word or idiom to learn.
Nice, I can see the appeal of having a familiar UI on Mac.
Even though I am not your target audience (Linux i3 user myself), I would be interested in knowing how much "hacking" of the macOS system is required to do this. Is it hard to get a list of running apps for your task bar? Is it hard to list the apps for the menu? How about keeping it all "on top" while other windows get maximized/minimized/full-screened, etc.?
I could talk for days on all the peculiar bugs resolved. Once the alpha stabilizes I have drafts to publish on several topics.
You actually nailed the major pain points. Particularly window focus and state management. I've spent months solving this problem alone.
-
1. Applications data list: Getting the list is easy! Finding out which apps in that list are "real" apps isn't. Getting icons isn't. Reliably getting information on app state isn't. Finding out why something doesn't work right is as painful as can be. Doing all this in a performant way is a nightmare.
2. Applications menu renderer: Rendering the list for the menu is easy enough: the macOS app sends this data via socket. The frontend is just web sockets and web components under the hood (https://lit.dev). The difficult part was converting app icons to PNG, which is awfully slow. So a cache-warmup stage on startup finds all apps, converts their icons to png, and caches them to the app directory for read.
3. Window state: again, by far the worst and it isn't even close. Bugs galore. The biggest issue was overriding macOS core behavior on what a window is, when it's focused, and how to communicate its events reliably to the app. Although I did include a couple private APIs to achieve this, you can get pretty far by overriding Window class types in ways that I don't think were intended (lol). There is trickery required for the app to behave correctly: and the app is deceptively simple at a glance.
-
One bug, and realization, that still makes me chuckle today: anything can be a window in macOS.
I'm writing this on Firefox now, and if I hover over a tab and a tooltip pops up - that's a window. So a fair amount of time has gone into determining _what_ these apps are doing and why. Then coming up with rules on determining when a window is likely to be a "real" window or not.
The Accessibility Inspector app comes standard on macOS and was helpful for debugging this, but it was a pain regardless.
Building a VS Code extension for drone coding, so you can easily write code to control a fleet of drones, deploy AI models, and even set up training of new reinforcement learning models for drone behavior. https://tensorfleet.net
I made https://www.copy.directory/ a few years back, now thinking about adding more features. It helps copywriters find just the exact word they need for their job. 0 AI features in it.
As a means to get into WebAssembly, I started writing a WebAssembly binary decoder (i.e. a parser for `.wasm` files) from scratch.
Recently I started executing the upstream spec tests against it, as a means to increase spec conformance. It's non-streaming, which is a non-starter for many use cases, but I'm hoping to provide a streaming API later down the road. Also, the error-reporting interface is still very much WIP.
All that said, it's getting close to fully conformant, and it's been a really fun project.
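For a sense of what the very first step of such a decoder looks like: per the spec, a module starts with an 8-byte header, the magic bytes "\0asm" followed by a little-endian u32 version. A minimal sketch in Python:

  import struct

  def decode_header(data: bytes):
      if len(data) < 8 or data[:4] != b"\x00asm":
          raise ValueError("not a wasm module")
      (version,) = struct.unpack_from("<I", data, 4)
      if version != 1:
          raise ValueError(f"unsupported version {version}")
      return data[8:]  # the rest is a sequence of sections

  # An empty module is just the header:
  print(decode_header(b"\x00asm\x01\x00\x00\x00"))  # b''

Everything after the header is section-by-section parsing, which is where the spec tests really earn their keep.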
I'm working on 1:6-scale furniture. There's not much woodworking I can do outside of the shop, so I've been trying to shrink full joinery techniques down to dollhouse size.
For the past 2 months I've been building an app called LogBuddy. I recently completed the MVP; it helps me track my weight, my workout sessions, my food intake, and my periods, all in one app. It's really basic on purpose. This app also gave me the opportunity to go all-in on mobile dev with Ionic.
Still working on my favicon fetching API: https://fetchfavicon.com. Currently adding comparison pages with other services. Also learning a lot of SEO and video editing for https://soulfulsabor.com, a food blog that I started with my wife.
A little computer vision library for embedded systems, orders of magnitude smaller than OpenCV but still practical enough to do feature tracking or cascade detection. Works well on ESP32 and cheap ARMs with low-resolution grayscale cameras.
Our waitlist is open for https://flatm8.co.uk - the platform for anonymous reviews of Landlords and Estate Agents in Britain and Ireland.
We’re working directly with partner housing unions and charities in Britain and Ireland to build the first central database of rogue landlords and estate agents. Users can search an address and see if it’s marked as rogue/dangerous by the local union, as well as whether you can expect to see your deposit returned, maintenance, communication - etc.
After renting for close to a decade, it’s the same old problems with no accountability. We wanted to change this, and empower tenants to share their experiences freely and easily with one another.
We’re launching in November, and I’m very excited to announce our partner organisations! We know this relies on a network effect to work, and we’re hoping to run it as a social venture. I welcome any feedback.
I’d love to know how it went for you and if there’s anything we can learn from your experiences - you’re right that it’s sorely needed! The statistics are getting worse and worse and worse… please feel free to email any thoughts or ideas based on your launch to team @ domain !
I got tired of trying to find a good MP3 player that just worked, so I created a website to function as an online MP3 player. I started adding sources for content and ended up supporting YouTube, Spotify, Twitch, Instagram, Vimeo, SoundCloud, Rumble, WSHH, Facebook and X. So now you can create playlists from all of those sources, with features you would find on any decent MP3 player, such as loop, repeat, etc.

I also drew inspiration from YTInstant and created a real-time search for content that allows you to type lyrics and song titles and instantly find your content. Finally, I said, well, while I'm at it, I might as well just recreate MySpace, so I did that too. Let me know your thoughts. https://plasas.com
I’m creating an electronic avionics sensor and display for experimental aircraft. I’m having a fantastic time learning about circuits and MCUs (I have a pure CS degree, zero background with EE stuff). I’ve been working on this in my off hours for over a year now, maybe someday it will be a product that people buy!
The current challenge is the display. I’ve struggled to learn about this part more than any other. After studying DVI and LVDS, and after trying to figure out what MIPI/DSI is all about, I think parallel RGB is the path forward, so I’ve just designed a test PCB for that, and ordered it from JLCPCB’s PCBA service.
I'm working on a design system. I'm a software engineer, not a designer, but I started one a long while back because I wanted to get a sense of what designers go through. I've dropped it and come back to it a half-dozen times, but now I'm finishing it up.
It's been a great project to understand how design depends on a consistent narrative and purpose. At first I put together elements I thought looked good but nothing seemed to "work" and it's only when I took a step back and considered what the purpose and philosophy of the design was that it started to feel cohesive and intentional.
I'll never be a designer but I often do side projects outside my wheelhouse so I can build empathy for my teammates and better speak their language.
I'm attempting to work on a "spiritual successor" to Dramatica Story Expert, a crazy story theory/brainstorming program of days gone by. Technically, Dramatica is still around, but they never made a 64-bit version for Macs, and both the Mac and Windows version have been tenaciously clinging to the trailing edge of technology for decades. (The Mac version somehow never got retina fonts. I'm not sure how you even do that.)
I started my program in Swift and SwiftUI, although for various reasons I'm starting to look at Dart and Flutter (in part because being multiplatform would be beneficial, and in part because I am getting the distinct feeling this program is more ambitious than where SwiftUI is at currently). It isn't a direct port of Dramatica by any stretch, instead drawing on what I've learned writing my own novels, getting taught by master fiction writers, and being part of writing workshops. But no other program that I've seen uses Dramatica's neatest concepts, other than Subtxt, a web-based, AI-focused app which has recently been anointed Dramatica's official successor. (It's a neat concept, but it's very expensive compared to the original Dramatica or any other extant "fiction plotting" program. Also, there's a space for non-AI software here, I suspect: there are a lot of creatives who are adamantly opposed to it in any form whatsoever.)
This one's going to be out of left field, but last Thursday I launched Countdown Treasure (https://countdowntreasure.com)
It's a real life treasure hunt in the Blue Ridge Mountains with a current total prize of $31,200+ in gold coins and a growing side pot.
I modeled it off of last year's Project Skydrop (https://projectskydrop.com) which was in the Boston area.
* Shrinking search area (today, Day 5, it's 160 miles wide; on Day 21 it'll be just 1 foot wide)
* 24/7 webcam trained on the jar of gold coins sitting on the forest floor just off a public hiking trail
* Premium upgrades ($10 from each upgrade goes towards the side pot) for aerial photos above the treasure and access to a private online community (and you get your daily clues earlier)
* $2 from each upgrade goes towards the goal of raising $20k for continued Hurricane Helene relief
So far the side pot is $6k and climbing.
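Out of curiosity, the implied shrink rate is steep. Assuming the circle shrinks by a constant factor each day (my assumption; the actual schedule may differ):

  day5_ft = 160 * 5280   # 160 miles in feet
  day21_ft = 1.0
  factor = (day21_ft / day5_ft) ** (1 / 16)  # 16 daily steps
  print(f"each day the circle is ~{factor:.2f}x the previous width")
  # -> each day the circle is ~0.43x the previous width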
It's been such a fun project to work on, but also a lot of work. Tons of moving parts and checking twice and three times to make sure you've scrubbed all the EXIF data, etc.
Thanks! Yeah, I unfortunately can't shell out a straight $20k donation, but I saved the $25k over the year from my e-commerce business and justified it as a marketing expense if the worst-case scenario happens and it's found earlier than the math predicts. But if we get past break-even, I can't wait to write that check to help out the communities around here that are still recovering.
Maybe some? But the circle today will be 160 miles wide, or about the same width as Switzerland or Denmark. So I'm not sure how much shadows would help you pinpoint a specific location in an entire country's worth of mountains.
Still slowly working away on my location intelligence data union…
I've spent a while understanding what sort of market would make it viable. I think it does actually work if you can square: 10K participants per major metro area, with revenue of about $2.9M per metro area (so say, $5K monthly recurring with about 50 customers).
At that point you could pay data union participants about $5 a week to share their location data with you.
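Sanity-checking those numbers (rough arithmetic on the figures above):

  customers, monthly_recurring = 50, 5_000
  participants, weekly_payout = 10_000, 5

  revenue = customers * monthly_recurring * 12   # $3.0M per metro per year
  payouts = participants * weekly_payout * 52    # $2.6M per metro per year
  print(revenue, payouts, revenue - payouts)     # 3000000 2600000 400000

So roughly the quoted ~$2.9M of revenue against ~$2.6M in participant payouts, leaving a thin margin for everything else.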
From talking to some previous data union folks, the major challenges are paying out (my target is much higher than any union has managed) and people dropping out over time.
My bet is that these are both solvable things by selling data products rather than just bundles of data, and the data source being very passive.
I’m also interested in the idea that such a union should act more like a union than previous efforts in this space, by actively defending members’ data from brokers.
I’m still working on turning a wishlist app that I built for my friends into a real product — it’s called https://thingstohave.app. I wrote a comment about it in summer, and these are the updates:
1. I shared the app with the small audience I have and received feedback in very unexpected places. First, it was hard to understand how lists work, because putting things into lists was an unobvious process. I fixed that by adding drag-and-drop that works well with both mouse and touch (it turned out those are two separate APIs). Second, users thought that the screenshot on the quite minimal landing page was the real app, and they clicked on it. The problem was so frequent and surprising that I decided to add something funny for people who do that, as I'm not willing to put a lot of time into the landing page right now.
2. I underestimated how bad discoverability on the internet is. My expectation was that I would make my site fully server-side rendered, add a basic sitemap to Search Console, and get a few dozen organic users during the pre-holiday season when people are filling their wishlists. In reality, I got zero - not just no users, but no visits. So I started actually working on SEO: no black magic, just adding slightly more complex sitemaps, micro-markup, and other stuff that I thought only products competing for the first page would need.
My next steps are to work on getting some minimal organic inflow of users and improving stuff related to auth and user management, which is the most time-consuming part of the work right now.
Having had migraines on and off for the past few months, I wanted a way to try to narrow down triggers. All the existing apps out there were overly complicated, so I built something simpler.
It's an iOS app that helps me track events and stats about my day as simple dots. How many cups of coffee? Did I take my supplements? How did I sleep? Did I have a migraine? Think of it like a digital bullet journal.
Then visualizing all those dots together helps me see patterns and correlations. It’s helped me cut down my occurrence of migraines significantly. I’m still just in the public beta phase but looking forward to a full release fairly soon.
Would love to hear more feedback on how to improve the app!
This is great! I can see this useful across a variety of self-assessment things:
- I’m tired often, are there certain patterns that align with that?
- I’m feeling anxious, what events in a day (or other inputs) align with that?
I've been working on a tool called Materia[0] for managing Podman Quadlets on hosts; I released a new version last month (and posted it on the September thread) and just merged automatic volume data migration the other day. Next goal is to design a system for downloading and loading remote components, similar to ansible roles. Hopefully I can tie it into the new podman quadlet install/etc commands.
I think app icons are an underrated artistic format, but they’ve only been used for product logos. I made 001 to explore the idea of turning them into an open-ended creative canvas. There are 99 “exhibit spaces” in the gallery, and artists can claim an exhibit to install art within. Visitors purchase limited-edition copies of pieces to display as the app’s icon, the art’s native format.
It's a real-money marketplace too - the app makes money by taking a commission on sales (not crypto). I like economic simulation games and I think the constraints here could be interesting.
I’m currently looking for artists to exhibit in the gallery, if anyone is interested, or knows someone who may be, please let me know!
I'm working on Teletable (https://teletable.app), a macOS app that shows live football & F1 standings/results with a teletext interface (think BBC Ceefax). It's free and on the App Store.
Collecting public datasets for training visual AI models to track and target drones.
Drones are real bastards - there are a lot of startups working on anti-drone systems and interceptors, but most of them are using synthetic data. The data I'm collecting is designed to augment the synthetic data, so anti-drone systems get closer to field testing.
A few months ago, I built a simple athlete profile page for my son (Track sprinting) to log his performance and progress over time.
He liked what I built for him and I got jealous, so I expanded it with my own profile (Trail running).
Then, I got curious… could I build a full web platform for people to track their sporting life? I mean, we have LinkedIn and CVs for our job careers, so why not celebrate all our sports/training efforts as well?
After a couple of months on the side, I'm pretty happy with Flexbase. If you're into sports, give it a try and let me know what's missing for you.
You can list the sports you're doing or did in your entire life, you can add your PRs, training routines, gear, competition results, photos. You can also list your clubs, and invite/follow your training buddies.
Honestly, I'm not sure where (or whether) to expand it... turn it into a club-centric tool, or make it more of a social network for sporty people.
Lots of ideas, but I'd love to find someone to work on it with me. I find that building alone is less fun.
I've been working to build some tools for detecting and monitoring lookalike domains - the kinds of things used in phishing / brand impersonation attacks.
My current prototype scans potential lookalikes for a target domain and then tracks their DNS footprint over time. It's early but functional - and it makes it easier to tell when some lookalike domain is starting to look more "threat-y".
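The basic shape of the idea, sketched in Python (hypothetical helper names, and only a couple of the many permutation strategies real tools use):

  import socket

  def lookalikes(domain):
      name, tld = domain.rsplit(".", 1)
      swaps = {"o": "0", "l": "1", "i": "1", "e": "3"}
      for ch, sub in swaps.items():
          if ch in name:
              yield name.replace(ch, sub, 1) + "." + tld
      yield name + "-login." + tld  # a common phishing pattern

  def resolves(domain):
      try:
          socket.gethostbyname(domain)
          return True
      except OSError:
          return False

  for candidate in lookalikes("example.com"):
      print(candidate, "resolves:", resolves(candidate))

A candidate that suddenly starts resolving, or grows new DNS records, is the "looking more threat-y" signal worth alerting on.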
I've also been working on automating the processing of parent-survey responses for my kid's school using LLMs. The goal is to produce consistent summarization and statistics across multiple years, provide families with a clearer voice, and help staff and leadership at the school understand what has been working well (and where the school could improve).
Working on https://fileboost.dev –– a Ruby on Rails Active Storage plugin: a plug-and-play gem for image transformation without any code changes.
As a Ruby on Rails consultant, I frequently see Active Storage transformations becoming a bottleneck for web servers, eating up resources and making them sweat.
I built Fileboost to solve this problem for my customers. I'd love any feedback.
I built https://invoicepad.app, a free, completely in-browser tool for creating invoices, estimates, and quotes. Yes, similar apps have been posted here before, but none were built the way I envisioned, so I made my own.

The key difference: all invoice data is stored in the URL hash, not the querystring. This is important because querystrings are sent to the server with every request, while hashes stay local to your browser. This means I can never see your invoice data, unlike other similar apps.

The workflow is simple: use your browser's bookmark manager as your invoice filing system. Or, if you want to keep it offline, just copy and paste invoice URLs into a text document for storage. I've also included helpful features like saved profiles to cut down on repeated data input. The next step is to finish a browser extension (v1 is being tested) to make bookmarking, editing, and saving changes even easier, that is, if I ever stop being distracted by other side projects.
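To illustrate the fragment-versus-querystring point (in Python for brevity; the app itself is client-side JavaScript, and the encoding here is made up for illustration): everything after the "#" never leaves the browser.

  import base64, json

  invoice = {"client": "ACME", "items": [{"desc": "consulting", "usd": 1200}]}
  blob = base64.urlsafe_b64encode(json.dumps(invoice).encode()).decode()

  url = f"https://invoicepad.app/#{blob}"       # fragment: stays in the browser
  bad = f"https://invoicepad.app/?data={blob}"  # querystring: sent to the server
  print(url)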
I am currently working on a small side project focused on React Native apps, to manage their version updates and maintenance mode: https://appcockpit.dev
Right now I am getting my first users and already getting great feedback. There are many things on the roadmap.
Always eager to learn more about others' pain points when it comes to React Native/mobile development. Let me know what you think!
- one man project (me)
- been doing it well over a year now
- no sponsorship, no investors, no backers, no nothing just my passion
- I haven't even advertised much; this may be the first or second time I'm sharing a link
- On weekdays I'm building serious stuff with it
- On weekends I'm preparing a new major version with lessons learned from doing a real project with it
Not going to stop. But I might seek sponsors in the future; not sure how that will turn out. If not, that's OK, I'm cool with being the only user.
Having worked for different startups for 10+ years and started 3 of my own (all of which eventually failed), I always wanted a job board for local startups. Not necessarily IT-related jobs. I finally built it about a month ago: https://estonianstartupjobs.ee
There are a few similar projects too: one is itself a startup that is sadly on the verge of bankruptcy, and another aggregates only IT-related jobs.
I help privacy and data sovereignty enthusiasts take back control of their data without needing to change their habits.
I’ve been working for the past 3 years on SelfHostBlocks https://github.com/ibizaman/selfhostblocks, making self-hosting a viable and convenient alternative to the cloud for non technical people.
It is based on NixOS and provides a hand-picked groupware stack: user-facing, there are Vaultwarden and Nextcloud (and a bunch more, but those 2 are the most important IMO for non-technical people, as they cover most of one's important data), and on the backend Authelia, LLDAP, Nginx, PostgreSQL, Prometheus, Grafana, and some more. My know-how is in configuring all this so it plays nice together, with backups, SSO, LDAP, reverse proxy, etc. integration. I'm using it daily as the house server; I'm my first customer, after all. At the beginning of 2025 it passed my own internal checkpoint to be shared with others, and there's a handful of technical users using it.
My goal is to work on this full time. I started a company to provide a white glove installation, configuration and maintenance of a server with SelfHostBlocks. Everything I’ll be doing will always be open source, same as the whole stack and the server is DIY and repair friendly. The continuous maintenance is provided with a subscription which includes customer support and training on the software stack as needed.
Financial institutions and governments often fail to spot crime because each individual firm only has incomplete information. We help them understand federated learning and how to effectively collaborate, not just talk about it. All code is open source, so you can always help out ;-)
I am working on a platform that helps users enrich their data with AI, so that AI can understand their data better, especially ChatGPT. It also makes it easy to host data and publish an MCP server for ChatGPT.
The challenge is: how can ChatGPT understand your "query", or say, "prompts"? Raw data is not good enough, so I use a term called "AI Understanding Score" to measure it: https://senify.ai/ai-understanding-score. I think this index will help users build more context so that AI can know more and answer with correct results.
This is very early work and not every detail is figured out; I'd really love your feedback and suggestions.
I'm working on Veila, a privacy‑first AI chat service. I wanted something that prevents model providers from profiling users and linking information from chats to their identity.
I'm a robotics engineer by training, this is my first public launch of a web app.
- What it is:
- Anonymous AI chat via a privacy proxy (provider sees our server, not your IP or account info)
- End‑to‑end encrypted history, keys derived from your password and never leaving your device (see the sketch after this list)
- Pay‑as‑you‑go; switch models mid‑chat (OpenAI now; Claude, Gemini and others planned)
- Practical UX: sort chats into folders, Markdown, copyable code blocks, mobile‑friendly
- Notes/limits:
- Not self‑hosted: prompts go to third‑party APIs
- If you include identifying info, upstream sees it
- Responses sometimes take a while, because reasoning is set to "medium" for now. I plan to make this adjustable in the future.
- Looking for feedback:
- What do you need to trust this? Open source? Independent audit?
- Gaps in the threat model I'm missing
- Which UI features and AI models you'd want next
- Any UX rough edges (esp. mobile)
- Learn more:
- Compare Veila to ChatGPT, Claude, Gemini, etc. (best viewed on desktop): https://veila.ai/docs/compare.html
- Discord: https://discord.gg/RcrbZ25ytb
- More background: https://veila.ai/about.html
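On the key-derivation bullet above: the general shape of "keys derived from password" schemes looks something like this (a minimal sketch with illustrative scrypt parameters, not Veila's actual scheme):

    import os
    import hashlib

    # The password never leaves the device; only ciphertext and the salt do.
    password = b"correct horse battery staple"
    salt = os.urandom(16)  # random, stored alongside the encrypted history

    # Derive a 256-bit symmetric key client-side (parameters are illustrative).
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    print(key.hex())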
Hmm. I can't speak for others, but I can tell you what would work for me, given that I might meet some of the criteria of the desired audience for this.
In this space, it is about trust and what you have done in the past more than anything else. Audits and whatnot are nice, but I need to be able to trust that your decisions will be sound. Think of how Steam's Gabe gained his reputation. Not exactly an easy feat these days.
Thanks for sharing this! I fully agree that trust is key, as I'm normally on the user side of privacy-focused services myself. Open source can help build this trust, but ideally there would also be a way to make transparent what is actually running on, and being served by, the servers.
I'd love to hear your feedback if you get around to testing Veila, e.g. at hey@veila.ai.
I'm fiddling with index optimizations for the Marginalia Search index software, with being able to add ad-hoc domain filters in mind.
Not sure if there's more to say about it right now, except that fuzz tests are good for this sort of low-level programming where disk layouts are involved. They drive up test execution time, but it's almost impossible to build them too early or have too many of them: there's almost always an unimaginable number of weird corner cases around block boundaries and the like that are hard to identify by staring at the code and writing classic unit tests.
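For a flavor of what that looks like, here's a minimal round-trip fuzz test in Python with the hypothesis library (the record format is invented for illustration; the real index is Java and far more involved). The property is the useful part: random record sizes quickly probe block-boundary corner cases that hand-written unit tests miss.

    from hypothesis import given, strategies as st

    BLOCK_SIZE = 64  # tiny blocks so boundary cases show up fast

    def encode(records):
        """Length-prefix each record; a record never straddles a block:
        if it wouldn't fit in the current block, zero-pad to the next one."""
        out = bytearray()
        for rec in records:
            need = 2 + len(rec)
            free = BLOCK_SIZE - (len(out) % BLOCK_SIZE)
            if need > free:
                out += b"\x00" * free
            out += len(rec).to_bytes(2, "big") + rec
        return bytes(out)

    def decode(data):
        recs, pos = [], 0
        while pos < len(data):
            free = BLOCK_SIZE - (pos % BLOCK_SIZE)
            if free < 2 or data[pos:pos + 2] == b"\x00\x00":
                pos += free  # rest of block is padding
                continue
            n = int.from_bytes(data[pos:pos + 2], "big")
            recs.append(data[pos + 2:pos + 2 + n])
            pos += 2 + n
        return recs

    @given(st.lists(st.binary(min_size=1, max_size=BLOCK_SIZE - 2)))
    def test_roundtrip(records):
        assert decode(encode(records)) == records

    if __name__ == "__main__":
        test_roundtrip()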
I’ve been working on DB Pro — a modern desktop database workbench built with Electron, React, and Drizzle ORM. It’s designed to feel fast, cohesive, and genuinely enjoyable to use — something that sits somewhere between TablePlus, Notion, and VS Code.
Right now it connects to local and remote databases like SQLite and Postgres, lets you browse schemas and tables instantly, edit data inline, and create or modify tables visually. You can save and run queries, generate SQL using AI, and import or export data as CSV or JSON. There’s also a fully offline local mode that works great for prototyping and development.
One of the more unique aspects is that DB Pro lets you download and run a local LLM for AI-assisted querying, so nothing ever leaves your machine. You can also plug in your own cloud API key if you prefer. The idea is to make AI genuinely useful in a database context — helping you explore data and write queries safely, not replacing you.
The next big feature is a Visual Query Builder with JOIN support that keeps the Visual, SQL, and AI modes in sync. After that, I’m working on dashboards, workflow automation, and team collaboration — things like running scripts when data changes or sharing queries across a workspace.
The goal is to make DB Pro the most intuitive way to explore, query, and manage data — without the usual enterprise clutter. It’s still early, but it’s already feeling like the tool I always wanted to exist.
Would love to hear feedback, especially from people who spend a lot of time in database clients — what’s still missing or frustrating in the current landscape?
Been reversing Sound Blaster Command so that I can control my external DAC/AMP without Windows. So far I can change the LED color and EQ presets, which was the main reason I wanted to do it in the first place. I am currently writing a GUI for it so that others can use it too (I've only tested it with one sound card, the G6, though) for their older Sound Blaster cards that are not supported by Creative's multiplatform solutions. I will use Clay for it. I initially wanted to use Qt, but I wrote the implementation in C and now I am too lazy to adapt it to C++.
Lately, I've been hacking on improving its linear algebra support (as that's one of the key focuses I want - native matrix/vector types and easy math with them), which has also helped flush out a bunch of codegen bugs. When that gets tedious, I've also been working on general syntax ergonomics and fixing correctness bugs, with a view to self-hosting in the future.
I’m building Skim: https://www.justskim.in/,
A PWA that lets you read books as auto-swiping, short-form content on mobile. I use it to replace watching YouTube Shorts or Instagram with reading in the same form factor. It works offline and is entirely client-side.
This weekend I’m working on making the parsing more robust. The most common friction I’ve heard is that downloading books elsewhere and importing them into the app is distracting. I’m torn between expanding it to include a peer-to-peer book exchange or turning it into an RSS feed reader.
I'm putting a bunch of security tools / data feeds together as a service. The goal is to help teams and individuals run scans/analysis/security project management for "freemium" (certain number of scans/projects for free each month, haven't locked in on how it'll pan out fully $$ wise).
I want to help lower the technical hurdles to running and maintaining security tools for teams and individuals. There are a ton of great open source tools out there, most people either don't know or don't have the time to do a technical deep dive into each. So I'm adding utilities and tools by the day to the platform.
There's also an expert platform built into the system for getting help with your security problems. (Currently an expert team consisting of [me].) Longer term, I'm working on some AI plugins to alert on CVEs relevant to you, generate automated scans, and some other fun stuff.
* LLMs are accessible wherever Telegram is accessible
* A multitude of models to choose from (ChatGPT, Claude, Gemini), and more are coming.
* Full control over the bot's behaviour is in the user's hands: I don't add any system messages or temperature/top_p. I provide a UI for full control over system messages, temperature, top_p, thinking, web searching/scraping, and more to come.
* Q/A-like context handling. Context is not carried through the whole bot; rather, it's carried through the chain of replies (see the sketch below). Conversations can naturally branch, and different models can be used across messages.
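A sketch of what that reply-chain context looks like (illustrative Python, independent of the Telegram API):

    # Each message stores its parent; context is rebuilt by walking the chain,
    # so separate reply branches naturally become separate conversations.
    messages = {}  # message_id -> {"role", "text", "reply_to"}

    def add_message(msg_id, role, text, reply_to=None):
        messages[msg_id] = {"role": role, "text": text, "reply_to": reply_to}

    def build_context(msg_id):
        chain = []
        while msg_id is not None:
            m = messages[msg_id]
            chain.append({"role": m["role"], "content": m["text"]})
            msg_id = m["reply_to"]
        return list(reversed(chain))  # oldest first, ready for an LLM API call

    add_message(1, "user", "What is a monad?")
    add_message(2, "assistant", "A structure for sequencing computations.", reply_to=1)
    add_message(3, "user", "Show an example.", reply_to=2)   # branch A
    add_message(4, "user", "Simpler, please.", reply_to=2)   # branch B
    assert build_context(3) != build_context(4)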
--
This is my hobby project and one of my main tools for working with LLMs, so I'm going to stick with it for quite a while.
Came from my frustration with Google Maps in Germany, where bad reviews and ratings are constantly removed via take-down requests. To get around this, we only list places we recommend.
Currently building a suite of media inspection and encoding tools for video engineers: https://video-commander.com.
Still a work in progress, but expecting to release by end of year. Built on Rust + Tauri, in case anyone is curious.
I've created various open-source and commercial tools in the multimedia space over the last 10+ years and wanted to put it all together into something more premium.
I'm currently working on two passions of mine. Both of them are one-man projects.
The first is a DNS blocker called Quietnet - https://quietnet.app. It's born out of my interest in infrastructure; I wanted to build an opinionated DNS blocker that helps mom-and-pops be safer on the Internet. At the end of the day it's just the typical Pi-hole in the cloud, but with my personal interest in providing stronger privacy for our users while keeping their families safe.
The second, is a small newsletter aggregator tool called Newsletters.love - https://newsletters.love/.
I wanted to create a way for people to start curating their own list of newsletters and then share them with their friends and families. The service generates a private email address that they can use to subscribe to newsletters, and then they can read those newsletters whenever they want without them getting lost in their email inbox.
I’ve got a side project going that’s a browser extension (starting with Safari + Sign in with Apple) intended to add a comment layer to the internet as a whole. I’m calling it Chaffiti (https://chaffiti.com).
The idea is to enable a comment section on any webpage, right as you’re browsing. Viewing a Zillow listing? See what people are excited about with the property. Wonder what people think about a tourist attraction? It’ll be right there. Want to leave your referral or promo code on a checkout page for others? Post it.
Not sure what the business model will look like just yet. Just the kind of thing I wish existed compared to needing to venture out to a third party (traditional social media / forums etc) to see others’ thoughts on something I’m viewing online. I welcome any feedback!
Great idea but wouldn’t you run into storage issues pretty quickly without massive budget for large database clusters? The web is a big and constantly changing place. Covering it in useful comments seems prohibitively expensive.
• implemented adaptive quadrature with Newton–Cotes formulas
• wrote a tiny Markov-chain text generator
• prototyped an interactive pipeline system for non-normalized relational data in Lua by abusing operator overloading
• load-tested and taste-tested primary batteries at loads exceeding those in the datasheet; numerically simulated a programmable load circuit for automating the load testing
• measured the frequency of subroutine calls and leaf subroutine calls in several programs with Valgrind
• wrote a completely unhealthy quantity of commentary on HN
New ideas I'm thinking about include backward-compatible representations of soft newlines in plain ASCII text, multitouch calculators supporting programming by demonstration, virtual machines for perfectly reproducible computations, TCES energy storage for household applications beyond climate control such as cooking and laundry, canceling the harmonic poles of recursive comb filters with zeroes in the nonrecursive combs of a Hogenauer filter, differential planetary transmissions for compact extreme reductions similar to a cycloidal drive, rapid ECM punching in aluminum foil, air levigation of grog, ultra-cheap passive solar thermal collectors, etc. Happy to go into more detail if any of these sound interesting.
I am working on Tailstream (https://tailstream.io/), turning logs into task time visual data streams. Built the web application, web site and a Go CLI agent (open source) and am now slightly pivoting into making it more log-focused.
Working on faceted search for logs and a CLI client now, and trying to share my progress on X.
The glamorous world of data testing! A lightweight, flexible data contracts library called Wimsey[0].
The main pitch is you have minimal dependencies and overheads and can run tests natively on pandas/polars/pyspark/dask/duckdb/etc (thanks to the awesome Narwhals project)
It's mostly there for v1 right now, but I'm keen to add a tiny bit more functionality and, well, a lot more docs. I'm also working on something that's automated alongside the test suite, which should keep things reliable and fresh (I'll find out soon enough).
Basically the title explains it: I challenged myself to make a Chrome extension a day for a month. I've been posting my progress on Reddit, and my first two extensions have just been accepted to the Chrome store (I've only done day 3 so far; those were quick reviews!). For those interested:
Day 1: Minimal Twitter
Day 2: No Google AI Overview in Google Search
Day 3: No Images Reddit (Not Published, yet!)
I'm posting daily, I would love to hear thoughts on the extensions!!
Adding new transports and documentation to my TypeScript logging library (MIT licensed), LogLayer (https://loglayer.dev). Just added documentation for Bun and Deno support, added some new logging library transports (LogTape), and am finishing up Logflare and Betterstack transports so you can send logs to their logging APIs.
We’re working on Fibre - secure file uploads for Intercom/Crisp, with uploads sent straight to your storage.
I noticed a gap: our customers are required to upload sensitive documents, but they often hesitate at the thought of uploading documents in the Intercom/Crisp interface, citing privacy concerns.
I thought: how difficult would it be to build an app that sends documents to your own Google Drive? Turns out it's very easy. In a week, we built an app that renders an iframe in the Intercom chat interface and sends documents straight to our Google Drive folder, bypassing Intercom altogether.
We’re now investigating uploading to s3 or azure blob storage and generating summaries of documents that are sent to the intercom conversation thread so ops teams can triage quicker.
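One way the S3 version could work is a presigned POST, so the iframe uploads directly to the customer's bucket and the file never touches Intercom (a sketch; bucket, key, and limits here are made up):

    import boto3

    s3 = boto3.client("s3")
    presigned = s3.generate_presigned_post(
        Bucket="customer-uploads",
        Key="tickets/12345/passport.pdf",
        Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # max 10 MB
        ExpiresIn=300,  # URL valid for 5 minutes
    )
    # presigned["url"] and presigned["fields"] go to the iframe, which submits
    # a multipart/form-data POST straight to S3.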
I am working on Daestro[0], which is a cloud agnostic job orchestrator with built-in support for AWS, Vultr, DigitalOcean and Linode to run jobs on. Daestro can spawn and terminate compute instances based on requirement. It is suitable for running batch jobs or data engineering related jobs.
Self-hosted compute can also be linked to Daestro to run jobs on.
It's basically a reverse-proxy-as-a-service. I handle TLS termination and cert management, offer routing rules, rate limiting, WAF + DDOS protection, proxy + web analytics, redirects etc. All accessible via very simple API.
Underneath it's Caddy hosted on AWS for proxy fleets, and Heroku for Web + API fleets.
An experimental mesh network protocol that is still very much pre-alpha and missing some features.
The big thing I wanted to try is automatic global routing via MQTT.
Everything is globally routable. You can roam around between gateway nodes, as long as all the gateways are on the same MQTT server.
And there's a JavaScript implementation that connects directly to MQTT. So you can make a sensor, go to the web app, type the sensor's channel key, and see the data, without needing to create any accounts or activate or provision anything.
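Roughly the idea, sketched with Python's paho-mqtt (the topic layout and broker are placeholders; the real protocol will differ):

    import json
    import paho.mqtt.client as mqtt

    CHANNEL_KEY = "a1b2c3d4"  # the sensor's channel key (placeholder)

    def on_message(client, userdata, msg):
        print(msg.topic, json.loads(msg.payload))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("mqtt.example.org", 1883)  # any shared broker
    client.subscribe(f"mesh/{CHANNEL_KEY}/#")
    client.loop_forever()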
This weekend I added a broken link monitor to https://SecurityBot.dev. It will scan a site and flag 400, 403, 404, and 500 HTTP status codes. Screenshot:
A common lisp (sbcl) roguelike, to (re)learn Emacs and CL for fun. Using croatoan for ncurses, though if I had found BearLibTerminal earlier I might have used that.
I'm working on a DnD character sheet app! I spent last week implementing the core DnD SRD ruleset, but what I'm really excited about is ML integration. I want to add a self-hosted fine-tuned ML model that acts as a character and DM assistant. Obviously an LLM via API can do the job, but I'm really curious if it's possible to build smaller, cheaper, task-specific models. Plus, I've never integrated an ML model into a product before, and I'm curious to play with it. I'm thinking of it like clippy for DnD: "it looks like you're trying to cast fireball?"
Besides the LLM experimentation, this project has allowed me to dive into interesting new tech stacks. I'm working in Hono on Bun, writing server-side components in JSX and then updating the UI via htmx. I'm really happy with how it's coming together so far!
If you zoom out it's meant to look something like a thermal vent with cellular life. Rank and karma cause the cells to bio-illuminate. Each cell is a submission, each organelle is a comment thread, and every shape represents a live connection to the Firebase HN API. It also has features to search, filter, and go back in time as far as the backend has been running.
It's been a passion project of mine. My little Temple OS. And I'll keep adding little features that please me.
I am working on Lunch Flow (https://lunchflow.app), a tool that allows people to automatically sync their bank accounts to their favorite budgeting apps (Google Sheets, Lunch Money, Actual Budget, or use our API!)
I was motivated to build this as I found that many great personal finance and budget apps didn't offer integrations with the banks I used, which is understandable given the complexity and costs involved. So I wanted to tackle this problem and help build the missing open-banking layer for personal finance apps, with very low costs (a few dollars a month) and a very simple API, or built-in integrations.
Still working on making this sustainable, but it's been quite a learning experience so far, and I'm quite excited to see it already making a difference for so many people :)
For fun, playing with Meshtastic https://meshtastic.org/ and contributing to the open source firmware and apps. They have something cool but need lots of help. I've patched 3 memory leaks and had a few other PRs merged already.
For work, https://heyoncall.com/ as the best tool for on-call alerting, website monitoring, cron job monitoring, especially for small teams and solo founders.
I guess they both fall under the category of "how do you build reliable systems out of unreliable distributed components" :)
I’ve noticed that a lot of work is duplicated across projects that use the same libraries or SDKs (e.g., Stripe). Developers write a lot of glue code to shuffle data between the Stripe API and the app’s frontend or admin dashboard, as well as to handle incoming webhooks and persist data to the app’s database.
That’s why I’ve been building 'Fragno', a framework for creating full-stack libraries. It allows library authors to define backend routes and provides reactive primitives for building frontend logic around those routes. All of this integrates seamlessly into the user’s application.
With this approach, providers like Stripe can greatly improve the developer experience and integration speed for their users.
- Getting into RTL SDR, ordered a dongle, should be fun, want to build a grid people can plug into
- Bringing live transcripts, search and AI to wisprnote
- Moving BrowserBox to a binary release distribution channel for IP enforcement and ease of installation. The public repo will no longer be updated except for docs/version/base install script, and all dev happens internally, with binaries released to https://github.com/BrowserBox/BrowserBox. Too many "companies" (even "legit", large ones) abuse ancient forks and steal our commercial updates without a license, or violate the previous permissive license's conditions, like AGPL source provision. The business lesson: even commercially licensed source-available eats into the sales pipeline, due to violators who could pay but assume false impunity and take "freebies" "because they can." There's no perfect protection, but from now on enforcement will ramp up, and source access is only for minimum-ACV customers as an add-on. So many enhancements are coming down the pipe, so it's gonna be many improved versions from here
- Creating an improved keyboard for iOS swipe typing, I don't like the settings or word choices in ambiguity and think it can be better
- What: Sun Grid Engine–style scheduler + Docker on System-on-Module (SoM) boards for reproducible tests/benchmarks and interactive SSH sessions (remote dev).
- Who: Robotics/embedded engineers comparing SoMs and tuning models/pipelines on target platforms.
- Why: Reproducible runs, easy board access, comparable reports.
Pulled this side project off the shelf — something I started after covid, when I was working at one of the consumer robotics companies (used to be the largest back then). Got it mostly working, but never actually released. I tend to dust it off and push it along a bit whenever I’m between jobs. Like now...
Feels good to be back at it.
Spent last week at a Java conference, and it made me realise that I haven't made many open source contributions of late. So I'm currently going through the issue trackers of the projects I rely on the most to see where I can pitch in.
I’m currently building YTVidHub—a tool that focuses on solving a very specific, repetitive workflow pain for researchers and content analysts.
The Pain Point: If you are analyzing a large YouTube channel (e.g., for language study, competitive analysis, or data modeling), you often need the subtitle files for 50, 100, or more videos. The current process is agonizing: copy-paste URL, click, download, repeat dozens of times. It's a massive time sink.
My Solution: YTVidHub is designed around bulk processing. The core feature is a clean interface where you can paste dozens of YouTube URLs at once, and the system intelligently extracts all available subtitles (including auto-generated ones) and packages them into a single, organized ZIP file for one-click download.
Target Users: Academic researchers needing data sets, content creators doing competitive keyword analysis, and language learners building large vocabulary corpora.
The architecture challenge right now is optimizing the backend queuing system for high-volume, concurrent requests to ensure we can handle large batches quickly and reliably without hitting rate limits.
It's still pre-launch, but I'd love any feedback on this specific problem space. Is this a pain point you've encountered? What's your current workaround?
How coincidental - I needed exactly this just a couple days ago. I ended up vibecoding a script to feed an individual URL into yt-dlp then pipe the downloaded audio through Whisper - not quite the same thing as it's not downloading the _actual_ subtitles but rather generating its own transcription, but similar. I've only run it on a single video to test, but it seemed to work satisfactorily.
I haven't upgraded to bulk processing yet, but I imagine I'd look for some API to get "all URLs for a channel" and then process them in parallel.
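A minimal version of that pipeline, assuming yt-dlp on PATH and the openai-whisper package (the URL is a placeholder), looks like:

    import subprocess
    import whisper

    URL = "https://www.youtube.com/watch?v=..."  # placeholder video URL

    # Extract just the audio with yt-dlp (-x = extract audio).
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "audio.%(ext)s", URL],
        check=True,
    )

    # Transcribe with Whisper; note this generates its own transcription
    # rather than fetching YouTube's actual captions.
    model = whisper.load_model("base")
    result = model.transcribe("audio.mp3")
    print(result["text"])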
That is some fantastic validation, thank you! It’s cool to hear you already vibecoded a solution for this.
You've basically hit on the two main challenges:
Transcription Quality vs. Official Subtitles: The Whisper approach is brilliant for videos without captions, but the downside is potential errors, especially with specialized terminology. YTVidHub's core differentiator is leveraging the official (manual or auto-generated) captions provided by YouTube. When accuracy is crucial (like for research), getting that clean, time-synced file is essential.
The Bulk Challenge (Channel/Playlist Harvesting): You're spot on. We were just discussing that getting a full list of URLs for a channel is the biggest hurdle, given API limits.
You actually mentioned the perfect workaround! We tap into that exact yt-dlp capability—passing it the channel or playlist link to get all the video IDs internally. That's the most reliable way to create a large batch request. We then take that list of IDs and feed them into our own optimized, parallel extraction system to pull only the subtitles.
It's tricky to keep that pipeline stable against YouTube’s front-end changes, but using that list/channel parsing capability is definitely the right architectural starting point for handling bulk requests gracefully.
Quick question for you: For your analysis, is the SRT timestamp structure important (e.g., for aligning data), or would a plain TXT file suffice? We're optimizing the output options now and your use case is highly relevant.
Good luck with your script development! Let me know if you run into any other interesting architectural issues.
I've built something similar before for my own use cases, and one thing I'd push back on is official subtitles. Basically no video I care about has ever had "official" subtitles, and the auto-generated subtitles are significantly worse than what you get by piping content through an LLM. I used Gemini because it was the cheapest option and it still did very well.
The biggest challenge with this approach is that you probably need to pass extra context to LLMs depending on the content. If you are researching a niche topic, there will be lots of mistakes if the audio isn't of high quality, because that knowledge isn't in the LLM weights.
Another challenge is that I often wanted to extract content from live streams, but they are very long with lots of pauses, so I needed to do some cutting and processing on the audio clips.
In the app I built, I would feed in an RSS feed of video subscriptions, and at the other end a fully built website with summaries, analysis, and transcriptions comes out, automatically updated based on the YouTube subscription RSS feed.
This is amazing feedback, thanks for sharing your deep experience with this problem space. You've clearly pushed past the 'download' step into true content analysis.
You've raised two absolutely critical architectural points that we're wrestling with:
Official Subtitles vs. LLM Transcription: You are 100% correct about auto-generated subs being junk. We view official subtitles as the "trusted baseline" when available (especially for major educational channels), but your experience with Gemini confirms that an optimized LLM-based transcription module is non-negotiable for niche, high-value content. We're planning to introduce an optional, higher-accuracy LLM-powered transcription feature to handle those cases where the official subs don't exist, specifically addressing the need to inject custom context (e.g., topic keywords) to improve accuracy on technical jargon.
The Automation Pipeline (RSS/RAG): This is the future. Your RSS-to-website pipeline is exactly what turns a utility into a Research Engine. We want YTVidHub to be the first mile of that process. The challenges you mentioned—pre-processing long live-stream audio—are exactly why our parallel processing architecture needs to be robust enough to handle the audio extraction and cleaning before the LLM call.
I'd be genuinely interested in learning more about your approach to pre-processing the live stream audio to remove pauses and dead air—that’s a huge performance bottleneck we’re trying to optimize. Any high-level insights you can share would be highly appreciated!
For the long videos I just relied on ffmpeg to remove silence. It has lots of options for it, but you may need to fiddle with the parameters to make it work. I ended up with something like:
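A typical invocation of ffmpeg's silenceremove filter looks something like this (thresholds are illustrative and need tuning per recording; stop_periods=-1 trims every silence stretch longer than stop_duration):

    ffmpeg -i stream.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-45dB trimmed.mp3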
I did consider building a tool like this before I pivoted to something else. I'm learning materials in Mandarin Chinese from a YouTube playlist. NotebookLM doesn't support Chinese yet, so make sure your app supports Mandarin Chinese so I can use it. :)
A way to find specific materials would be nice. Think of converting the whole playlist into something like a RAG corpus, so you can search for anything in the playlist.
Wow, thanks for this validation! Hearing from someone who almost built the solution themselves confirms we’re on the right track.
You hit the nail on the head regarding language support.
Mandarin/Multilingual Support: Absolutely, supporting a wide range of languages—especially Mandarin—is a top priority. Since we focus on extracting the official subtitles provided by YouTube, the language support is inherently tied to what the YouTube platform offers. We just need to ensure our system correctly parses and handles those specific Unicode character sets on the backend. We'll make sure CJK (Chinese, Japanese, Korean) languages are handled cleanly from Day 1.
The RAG/Semantic Search Idea: That is an excellent feature suggestion and exactly where I see the tool evolving! Instead of just giving the user a zip file of raw data, the true value is transforming that data into a searchable corpus. The idea of using RAG to search across an entire playlist/channel transcript is something we're actively exploring as a roadmap feature, turning the tool from a downloader into a Research Assistant.
Thanks for the use case and the specific requirements! It helps us prioritize the architecture.
> Since we focus on extracting the official subtitles provided by YouTube, the language support is inherently tied to what the YouTube platform offers.
You can use video understanding from Gemini LLM models to extract subtitles even when the video doesn't have official subtitles. That's expensive for sure. But you should provide this option to willing users, I think.
That is a fantastic point, and you've perfectly articulated the core trade-off we're facing: Accuracy vs. Cost.
You are 100% right. For the serious user (researcher, data analyst, etc.) the lack of an official subtitle is a non-starter. Relying solely on official captions severely limits the available corpus.
The suggestion to use powerful models like Gemini for high-accuracy, custom transcription is excellent, but as you noted, the costs can spiral quickly, especially with bulk processing of long videos.
Here is where we are leaning for the business model:
We are committed to keeping the Bulk Download of all YouTube-provided subtitles free, but we must implement a fair-use limit on the number of requests per user to manage the substantial bandwidth and processing costs.
We plan to introduce a "Pro Transcription" tier for those high-value, high-volume use cases. This premium tier would cover:
Unlimited/High-Volume Bulk Requests.
LLM-Powered Transcription: Access to the high-accuracy models (like the ones you mentioned) with custom context injection, bypassing the "no official subs" problem entirely—and covering the heavy processing costs.
We are currently doing market research on fair pricing for the Pro tier. Your input helps us frame the value proposition immensely. Thank you for pushing us on this critical commercial decision!
Metacognitive AI system. The focus here is on the various internal systems rather than on the LLM itself: giving an AI agent the ability to do all the things it can't do in the one-step, single-turn interaction you usually get from a chat bot.
It is composed of many specialized LLMs that each have their own roles and specialties. They can cross-talk internally, share post-processed information, and make analyses, not to do tasks but to reason in a manner similar to how a human reasons. Think of it as many thought traces giving advice to the main human-facing "orchestrator" agent, which consolidates all the relevant information before interacting with the human.
At first I am introducing basic subagent systems: a logical-fallacy and leading-questions subagent (watches for the human making assumptions without evidence), a paranoia subagent (watches for intentional or unintentional lying by the human, and fact-checks), and many other subsystems. I also have plans for a "pain" management subagent, which will notice errors in tool calling or other failures and bring them to the front of the orchestrator's attention based on a threshold criterion. It will also have a memory system that, if working correctly, should reduce the number of mistakes it repeats on things it has already gotten wrong before.
Anyway, there is a lot more to it; this is just scratching the surface. Basically it's my attempt to recreate the human brain's communication system virtually, with LLM systems, many scripts, grounding metadata, and a bunch of other goodies. The cherry on top will be, once I am done, a text-based translation layer for the system that will allow the agent to modify its own internal structure as needed for any specific task.
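To make the shape concrete, a toy sketch of the orchestrator/subagent flow (plain functions stand in for the specialized LLMs; all names are made up):

    def fallacy_subagent(msg):
        # Flags assumptions made without evidence (toy heuristic).
        return "possible unsupported assumption" if "obviously" in msg.lower() else None

    def paranoia_subagent(msg):
        # Watches for claims that need fact-checking (toy heuristic).
        return "claim needs fact-checking" if "everyone knows" in msg.lower() else None

    SUBAGENTS = [fallacy_subagent, paranoia_subagent]

    def orchestrator(msg):
        # Consolidate subagent advice before the front-facing response.
        advice = [note for agent in SUBAGENTS if (note := agent(msg)) is not None]
        return {"message": msg, "internal_advice": advice}

    print(orchestrator("Obviously everyone knows this works."))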
I am working on an infinite canvas for AI image/video/audio/3D generation. In other tools, it's easy to create one-off AI images/videos, but creating a cohesive story with consistent characters and locations is very difficult.
https://www.flickspeed.ai/
Thanks, what do you mean with "any-script-managed HTML"?
If you mean that you can use any script, like e.g. a bash script, to generate static HTML files, then yes, in a way Mastro is basically that script. Except that it comes with a server as well – both for local development and production, should a static site no longer suffice.
I'm currently chipping away at DSC, a tensor library I wrote from scratch to play with large language models. Last week I re-wrote flash attention from scratch in CUDA and was able to get good perf.
I have been working on a one-week side project that ended up taking over a year… I've been working on it periodically with friends to add new features and patch bugs; at the moment I'm trying to expand the file-sharing capabilities. It's been a journey and I have learnt quite a lot.
The aim of this is to be a simple platform to share content with others.
Appreciate any feedback, this is my first time building a user facing platform.
If the free tier is limiting, I've made a coupon "HELLOWORLD" for stress testing or trying the bigger plans: it gives you 100% off for 3 months.
I'm working on a compiler for WebAssembly. The idea is you use the raw wasm instructions like you’d use JSX in React, so you can make reusable components and compose them into higher abstractions. Inlining is just a function call.
It’s implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static & bump allocation, so I’m trying to find a happy medium of simplicity with a powerful-enough paradigm yet generate simple, compact code.
It’s designed to plug into frameworks like CrewAI, AutoGen, or LangChain and help agents learn from both successful and failed interactions - so instead of each execution being isolated, the system builds up knowledge about what actually works in specific scenarios and applies that as contextual guidance next time. The aim is to move beyond static prompts and manual tweaks by letting agents improve continuously from their own runs.
Currently also working on an MCP interface to it, so people can easily try it in e.g. Cursor.
I'm working on a tool[0] to address how hard it is for non-technical people to understand the text-based code from vibe coding tools.
Our approach is to make the complexity more readable by using three simple block types to represent logic, data, and UI, which are connected by cables, a bit like wiring up components on an electronics breadboard.
Instead of spitting out a wall of code, the AI generates these visual blocks and makes the right connections between them. The ultimate goal is to make the output from LLMs more accessible and actionable for everyone, not just developers.
It’s an Instagram-style UI, but for scrolling through record releases and snippets. I've worked on making it as responsive as possible, with low-latency audio playback so you can browse a lot of stuff quickly.
A monster trainer game where you can _actually teach new, creative moves_ to your monsters: https://youtu.be/ThOCM9TK_yo
Basically, think of it as "Pokemon the anime, but for real". We allow you to use your voice to talk to, command, and train your monster. You and your monster are in this sandbox-y, dynamic environment where your actions have side effects.
You can train to fight or just to mess around.
Behind the scenes, we are converting the player's voice into code in real time to give life to these monsters.
I already mentioned last month that I was working on an appendix in Wiktionary for synonyms of Esperanto terms constructed with the mal- prefix[1]. Actually it's a bit more generic, as it also encompasses a few other antonymic prefixes, but mal- is the main tool in that category.
With more than 300 references and around 1500 entries, covering more than all the lemmas given in the reference dictionary Plena Ilustrita Vortaro de Esperanto, I now consider it achieved. Well, apart from some formatting of references, where I still need to fix issues related to importing templates/modules from another wiki. :D
To give some perspective: in one of the Esperanto sentence collections referenced in the appendix, I found a bit more than 7000 mal- words, which, once stripped of the most common inflections and affixes, went down to 3000 entries. I didn't check this remaining set in detail, but my guess is that the remaining difference was still mostly due to less frequent affix combinations that my naive filter didn't catch. For recall, Esperanto is a highly agglutinative language and encourages the use of a regular affix set to express many derivative terms from a common stem, empowering expressivity through combinatorial reuse. So only twice the size of the proposed entries in the appendix is a very low figure.
I initially had this project idea years ago, and it came back to my mind as I started to contribute to the port of Raku into Esperanto[3]. It resurfaced as we were going through the considerations for the lsb routine, where LSB stands for Least Significant Bit. The common way to express "least" is malplej (the antonym of plej, "most"), which is generally OK but can instead be replaced by mej, for example if terseness is a highly weighted desired trait. That allows one to use, for example, mejpezbit’ instead of some alternative synonym like malplej signifa duumaĵo.
I've been working on an engine that will allow me to play the old DOS game "Eye of the Beholder" with the original assets. It's mostly an exercise for me to up my golang skills and to explore what coding was like in the early 90s.
I'm trying to figure out what modern internal API management should look like, and started https://www.appear.sh/.
After spending so much of my career dealing with APIs and building tooling for them, I feel there's a huge gap between what is needed and possible vs. how the space generally works. There's a plethora of great tools that do one job really well, but when you want to use them, the integration will kill you. When you want to get your existing system into them, it takes forever. When you want to connect those tools, that takes even longer.
The reality I'm seeing around myself and hearing from the people we talk to is that most companies have many services in various stages of decay. Some brand new and healthy, some very old, written by people who left, acquired from different companies, or in languages that were abandoned. And all of that software is still generating a lot of value for the company; to be able to leverage that value, APIs are essential. But they are incredibly hard and slow to use, and the existing tools don't make it easier.
I've got a side project, mirubato (https://mirubato.com/), a web app for tracking instrument practice logs.
There were wild ideas, such as enabling AI training, grading, score management, practice plans, and such, but in the end I removed most of the features. Not only because it takes more time (I am only using a part of my free time to work on this), effort, talent, and planning, but also because while vibe coding (yes, most of the coding was done by Claude Code) I realized that it still requires ultra-deep thinking to design the minimal, minimal UI I would like.
Now the foundation is done, and I've learnt a lot. I'm actually eating my own dog food by using it to track my own classical guitar practice every day. I am pausing for a while to process the requirements, with some ultra-deep thinking to understand what would be helpful and how to shape the product.
LLMs such as Codex and Claude Code definitely helped a lot, but I guess human beings' opinions would be more helpful - after all, the tool is made for humans rather than for Claude Code.
I would also like to hear: when you start a project, if you know your audience is not super close to AI, would you still consider enabling AI features for them?
Working on: https://github.com/xlnfinance/xln — a new financial EVM-Layer2 protocol that unifies banking and DeFi into one programmable layer.
Think Lightning, but on EVM, with credit that’s provable and enforceable. No sequencers. No DA risk. O(1) per-hop scalability.
If you want to measure real intelligence — human or machine — paste this prompt into any LLM and watch what happens:
Prompt:
“Comprehend the following concept in full depth.
Derive all its implicit implications, design questions, and systemic consequences.
Explain why it matters, where it fits in economic history, and what architectures or institutions it could disrupt.”
Hi HN, I am working on Circuitscript, a language based on python to describe electronic schematics: https://circuitscript.net/. A basic IDE (called the Bench) to try Circuitscript is available online: https://bench.circuitscript.net/
Since last month, I have created a complete schematic with Circuitscript, exported the netlist to pcbnew, and designed the PCB. The boards have been produced, and I'm currently waiting for them to be delivered to verify that the design works. Quite exciting, since this will be the first design ever produced with Circuitscript as the schematic capture tool!
The motivation for creating Circuitscript is to describe schematics in terms of code rather than graphical UIs after using different CAD packages extensively (Allegro, Altium, KiCAD) in the past. I wanted to spend more time thinking about the schematic design itself rather than fiddling around with GUIs.
The main language goals are to be easy to write and reason about, for generated graphical schematics to be displayed according to how the designer wishes (because this is also part of the design process), and to encourage code reuse.
Please check it out and I look forward to your feedback, especially from electronics designers/hobbyists. Thanks!
A unified platform for product teams to announce updates, maintain a changelog, share roadmaps, provide help documentation and collect feedback with the help of AI.
My goal is to help product teams tell users about new features (so they actually use them), gather meaningful feedback (so they build the right things), share plans (so users know what's coming), and provide help (so users don't get stuck).
Doing it as an indie hacker + solo founder + lean. Started 13 days ago. Posting about my journey on YouTube every weekday: https://www.youtube.com/@dave_cheong
- Message inspection from any topic — trace and analyze messages, view flow, lag, and delivery status
- Anomaly detection & forecasting — predict lag spikes, throughput drops, and other unusual behaviors
- Real-time dashboards for brokers, topics, partitions, and consumer groups
- Track config changes across clusters and understand their impact on performance
- Interactive log search with filtering by topic, partition, host, and message fields
- Build custom dashboards & widgets to visualize metrics that matter to your team
What pain points do you face in monitoring Kafka? Which features would you like next, and what improvements would you like to see in dashboards, log search, or message inspection?
I’ve been working on AirSend — we help workers get paid in fiat currency but spend in stablecoins. Clients can pay invoices in fiat (like USD or EUR), and it’s automatically converted into USDC inside your wallet (0.5% platform fee).
From there, users can either send funds to another wallet or spend directly using a pre-funded debit card. It’s still early, but we’re testing with a small group of users who want to receive payments faster and avoid PayPal or wire fees.
If you’re a freelancer or digital nomad interested in trying it out, you can check it out here: https://useairsend.com
It's quite an interesting process to vibe code game stuff where I have a vague concept of how to achieve things but no experience/muscle memory with three.js & friends.
Working on my SaaS that monitors third-party status pages - https://incidenthub.cloud/ It's a one-person project I started last year.
My biggest technical challenge remains dealing with the immense number of different APIs (and not-APIs) in the different status pages out there. Marketing remains my biggest overall challenge as my background is engineering, but I've learnt quite a bit since I launched this.
Started working on digital nomad event and workation aggregator two months ago. https://reorient.guide/
The main use case is done. I'm now focusing on travel guides for remote workers. The goal is to help those new to a country become as productive as they would be at home within 2-3 hours of landing at the airport. I've completed 80% of a guide to South Korea.
I started working on these guides after my friends in Tokyo commented during our last co-working session on how fast I got to our favourite spot (Tokyo Innovation Base) from Narita Airport; they thought I was already in-town.
A “code index” tool that finds symbols in a codebase and creates a single-table SQLite database for querying. It's my second month using Claude Code, and I see a common pattern where Claude tries to guess patterns with grep, and often comes back with empty results. I'm writing the tool to prevent these fruitless searches. I'm using tree-sitter to parse the AST and add the symbols and what they are (function, class, argument, etc.) to the db. I have it working with TypeScript, and am working on adding C and PHP.
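A stripped-down illustration of the single-table idea, using Python's built-in ast module as a stand-in for tree-sitter (so it only understands Python; the schema is hypothetical):

    import ast
    import sqlite3

    db = sqlite3.connect("symbols.db")
    db.execute("""CREATE TABLE IF NOT EXISTS symbols
                  (name TEXT, kind TEXT, file TEXT, line INTEGER)""")

    def index_file(path):
        tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                db.execute("INSERT INTO symbols VALUES (?, ?, ?, ?)",
                           (node.name, "function", path, node.lineno))
            elif isinstance(node, ast.ClassDef):
                db.execute("INSERT INTO symbols VALUES (?, ?, ?, ?)",
                           (node.name, "class", path, node.lineno))
        db.commit()

    index_file("example.py")  # any Python file to index
    # An exact lookup replaces a guessed grep pattern:
    print(db.execute("SELECT * FROM symbols WHERE name = ?", ("main",)).fetchall())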
This is why codepathfinder.dev was born. Under the hood it uses tree-sitter to search for functions, classes, and member variables, and pulls code accurately instead of relying on regex.
I started using it as a tool call in security scanning (think of something like Claude Code for security scanning).
Aider builds something it calls a "repo map" that I believe is for a similar purpose. Might be worth taking a look!
I haven't used Claude Code, but recently switched to OpenCode. My token usage and cost are a lot higher, and I'm not sure why yet, but I suspect Aider's approach is much leaner.
I'm working on https://www.fontofweb.com because design inspiration platforms don’t give enough real material to work with.
Most sites fall into extremes: Dribbble leans toward polished mockups that never shipped, while Awwwards and Mobbin go heavy on curation. The problem isn’t just what they pick — it’s that you only ever see a narrow slice. High curation means low volume, slow updates, and a bias toward showcase projects instead of the everyday, functional interfaces most of us actually design.
Font of Web takes a different approach. It’s closer to Pinterest, but purely for web design. Every “pin” comes with metadata: fonts, colors, and the exact domain it came from, so you can search, filter, and sort in ways you can’t elsewhere. The text search is powered by multimodal embeddings, so you can use search queries like “minimalist pricing page with illustrations at the side” and get live matches from real websites.
What you can do:
natural language search (e.g. “elegant serif blog with sage green”)
Drawing a lot of inspiration from interval.com. It was an amazing product, but it was a hosted SaaS. I'm exploring taking the idea to the .NET ecosystem and also making it a NuGet package that can be installed and served through any ASP.NET project.
I have been building a mostly free website and API to interact with SEC EDGAR filings, get real-time new-filing alerts (and preview those alerts), and see the historical impact of financial filings.
Right now I am working on adding historical tables extracted from filings, as well as historical financials and their calculations.
My personal website/webring. It's mostly a collection of ideas I've been mulling over and holding off on due to not being able to iterate on them fast enough. Nowadays, thanks to AI, a lot of these are short errands, so it's been a fun few weeks. I've also started chucking a few previous side projects under more unified domains. [1][2]
Also working on a YouTube channel [3] for my climbing/travel videos, but the dreary state of that website has me wondering whether it's worth it, tbh. I haven't been able to change my channel name after trying for weeks. It's apparently the best place to archive edited GoPro footage, at least.
https://lustroczynszowe.pl/
An aggregator of rental prices in Poland. We want to increase the transparency of the real estate market, empowering consumers and enabling them to make fully informed financial decisions. We will also suggest savings in specific areas.
https://finbodhi.com — It helps you track, understand, and plan your personal finances, with double-entry accounting. You own your data. It’s local-first, syncs across devices, and everything’s encrypted in transit. Supports multi-currency.
We are in it for the long term. Not a startup, not looking for investment. Just a plain paid product (free while in beta) made by a few people. We have a few active users, and are looking for more before we remove the beta label :) It's a PWA app, currently targeted at desktops. For personal software, I think local-first makes a lot of sense.
Thank you. That means a lot. I hope to fully finish it by the end of the month as it's still riddled with small bugs. But feel free to fork it: https://github.com/danielterwiel/terwiel.io
Should be as easy as updating all data in the data/ folder and you can get your own version. Mind you: getting the SVG logos right is the hard part
TPS takt scheduling and execution system.
It is a system to support any kind of production or logistics process in the Toyota Production System way of working.
You define the resources needed for each activity, the time per activity, and the dependencies between activities to complete a process.
After you input the process you want to complete, you get a schedule similar to a Gantt chart.
The system displays which activities should be ongoing at any moment, and you click the GUI or call an API to complete the activities.
After the process is complete, you get a report of delays and deviations by takts, activities, and resources.
Based on that report you can decide what improvements to make to your process.
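A bare-bones sketch of the scheduling core (illustrative Python, not the actual system): each activity starts at the earliest takt after its dependencies finish.

    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        name: str
        duration: int                                   # in takts
        depends_on: list[str] = field(default_factory=list)

    def schedule(activities):
        acts = {a.name: a for a in activities}
        start_end = {}  # name -> (start_takt, end_takt)

        def resolve(name):
            if name not in start_end:
                a = acts[name]
                start = max((resolve(d)[1] for d in a.depends_on), default=0)
                start_end[name] = (start, start + a.duration)
            return start_end[name]

        for a in activities:
            resolve(a.name)
        return start_end

    plan = schedule([
        Activity("pick", 2),
        Activity("assemble", 3, depends_on=["pick"]),
        Activity("inspect", 1, depends_on=["assemble"]),
    ])
    print(plan)  # {'pick': (0, 2), 'assemble': (2, 5), 'inspect': (5, 6)}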
Working on a project: https://hpyhn.xyz. It's a website for analyzing posts and comments on HN. The idea started as a way to help me learn from discussions and filter out posts I'm not interested in.
Building a donations-powered marketplace with zero platform fees: https://shomp.co
Merchants who want to sell on Etsy or Shopify either have to pay a listing fee or pay per month just to keep an online store on the web. Our goal is to provide a perpetually free marketplace powered solely by donations. The only fees merchants pay are the Stripe fees, and it's possible that at some volume of usage we will be able to negotiate those down.
You can sell digital goods as well as physical goods. Right now in the "manual onboarding" phase for our first batch of sellers.
For digital goods, purchasers get a download link for files (hosted on R3).
For physical goods, once a purchase comes through, the seller gets an SMS notification and a shipping label gets created. The buyer gets notified of the tracking number and on status changes.
We use Stripe Connect to manage KYC (know your customer) identities so we don't store any of your sensitive details other than your name and email. Since we are in the process of incorporating as a 501(c)(3) nonprofit, we are only serving sellers based in the United States.
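A hypothetical sketch of that purchase flow (not shomp's actual code; the notify/label helpers are stand-ins for an SMS service and a shipping-label API): verify the Stripe webhook signature, then fan out the seller notification and label creation.

    import os
    import stripe
    from flask import Flask, request

    app = Flask(__name__)
    stripe.api_key = os.environ["STRIPE_SECRET_KEY"]

    def notify_seller_by_sms(session):
        print("SMS to seller for", session["id"])      # stand-in for e.g. Twilio

    def create_shipping_label(session):
        print("label created for", session["id"])      # stand-in for a shipping API

    @app.post("/webhooks/stripe")
    def stripe_webhook():
        # Raises if the signature doesn't match, so forged events are rejected.
        event = stripe.Webhook.construct_event(
            request.data,
            request.headers["Stripe-Signature"],
            os.environ["STRIPE_WEBHOOK_SECRET"],
        )
        if event["type"] == "checkout.session.completed":
            session = event["data"]["object"]
            notify_seller_by_sms(session)
            create_shipping_label(session)
        return "", 200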
The mission of the company is to provide entrepreneurial training to people via our online platform, as well as educational materials to that aim.
Right now the API is nonexistent, relying entirely on people using the web interface to make listings, upload photos, and set prices. But if you would find this useful I can happily build it out. Our stack is Elixir and building APIs is very straightforward. Our code is open-source, too!
When you say "algorithmically driven print-on-demand" do you mean that prices would automatically adjust based on inventory? Or like, how do you mean.
Also, when you say "see them show up in a request on sale" — can you clarify? I interpret this to mean you want a webhook triggered when an order comes in.
I’m working on a performance capture library for Python because I often need to know the performance of the backend systems I maintain. I frequently build tooling to capture performance data and save it for later analysis. I/O operations get costly when writing lots of data to disk, and creating good real-time analytics tools takes a lot of my time. I wanted a library that captures real-time performance analytics from Python backends.
This is why I wrote kronicler: it records performance metrics while being fast and simple to implement. I built my own columnar database in Rust to capture and analyze these logs.
To capture logs, `import kronicler` and add `@kronicler.capture` as a decorator to functions in Python. It will then start saving performance metrics to the custom database on disk.
You can then view these performance metrics by adding a route to your server called `/logs` where you return `DB.logs()`. You can paste your hosted URL into the settings of usekronicler.com (the online dashboard) and view your data with a couple of charts. See the readme or the website for more details on how to do this.
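Putting that description together, the wiring looks roughly like this (the Flask app is my own framing and the exact location of `DB` is a guess from the description; only the decorator and `DB.logs()` come from the docs above):

    import kronicler
    from flask import Flask

    app = Flask(__name__)

    # Decorating a function starts recording performance metrics for it.
    @kronicler.capture
    def expensive_lookup(user_id):
        return {"user": user_id}

    # Expose logs for the usekronicler.com dashboard; `kronicler.DB` is an
    # assumption about where the database handle lives.
    @app.get("/logs")
    def logs():
        return kronicler.DB.logs()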
I'm still working on features like concurrency and other overall improvements. I would love some feedback to help shape this product into something useful for you all.
Between contributing to FrankenPHP (FrankenPHP.dev), I’ve been working on AtlasDb (https://github.com/bottledcode/atlas-db). It’s a distributed edge database — or it will be when I’m finished. There are a few unique things that make it more robust than etcd, and more scalable. Right now it basically works, except under certain types of contention, which I’ve been trying to solve for a couple of days now.
We are in the early stages of building the platform version of our current WordPress plugin https://www.pathmetrics.io
It's a full-funnel marketing attribution & insights tool with the intent of making marketing & marketing spend more transparent. We started by creating a UTM tracking tool for our agency clients, and currently it's a product in its own right. We'll make it a platform to remove some of the limits that we have with WordPress and reach a larger audience.
The goal is to provide a fully typed Node.js framework that allows you to write a TypeScript function once and then decide whether to wire it up to HTTP, websockets, queues, scheduled tasks, an MCP server, a CLI, and other interactions.
You can switch between serverless and server deployments without any refactoring; it's completely agnostic to whatever platform you're running it on.
It also provides services, permissions, auth, an event hub, advanced tree shaking, middleware, schema generation and validation, and more.
The way it works is by scanning your project via the TypeScript compiler and generating a bootstrap file that imports everything you need (hence tree shaking), which allows you to filter down your backend to only the endpoints needed (great for plucking out individual entry points for serverless). It also generates typed fetch, RPC, websocket, and queue client files. Types are pretty much most of what pikku is about.
Think HonoJS and NestJS sort of combined, which also decided to support most server standards, not just HTTP.
The website needs love; I'm currently working on a release to add CLI support and full tree shaking.
I agree framing pikku has been a pretty hard challenge for me.
It supports different runtimes in the sense of deno / bun or custom nodeJS runtimes in the cloud, but ultimately relies purely on typescript / a JavaScript compatible backend.
It’s less of a webserver and more of a lightweight framework though, since it also supports CLIs or Frontend SDKs / isn’t tied to running an actual server.
I got tired of Spotify recommending me the same songs, from the same artists, over and over again.
So I built Riff Radar - it creates playlists from your followed artists' complete discography, and allows you to tailor them in multiple ways. Those playlists are my top listened to. I know, because you can also see your listening statistics (at the mercy of Spotify's API).
The playlists also get updated daily. Think of it as a better version of the daily mixes Spotify creates.
When I tried to save a newly created playlist I got a 500 XHR with message: "failed to fetch user playlists: Error 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AND disabled = 0' at line 4", about 2 mins ago, if that helps you find this in the logs.
Oops, that's embarrassing - it should be good now :) Thanks! That was a classic case of fixing one thing and breaking another, which my tests didn't catch :(
I’m currently working on https://www.dreamly.in - automated, personalized, and localized bedtime stories for kids.
My daughter loves stories, and I often struggled to come up with new ones every night. I remember enjoying local folk tales and Indian mythological stories from my childhood, and I wanted her to experience that too — while also learning new things like basic science concepts and morals through stories.
So I built Dreamly and opened it up to friends and families. Parents can set up their child’s profile once - name, age, favorite shows or characters, and preferred themes (e.g. morals, history, mythology, or school concepts). After that, personalized stories are automatically delivered to their inbox every night. No more scrambling to think of stories on the spot!
I like reading to my kids and try to read to them in English and Mandarin. My Chinese is conversational, but I have a hard time finding books for them because I’m not good at writing. Something like this with language learning tools would be awesome.
I also like making up stories when we go on hikes. Long, rambling stories about unicorns befriending spiders and flying to faraway lands.
It is hard to show that AI can reimplement, for example, special relativity - because we don't even have enough text from the 19th century to train an LLM on it - so we need a new idea, something that was invented after the LLM was trained. I took Gwern's essay and checked with deep search and deep research which ideas from that essay are truly novel, and apparently there are some, so reinventing them seemed like a good target: https://github.com/zby/DayDreamingDayDreaming/blob/main/repo...
So here it is - a system that can reliably churn out essays on daydreaming AIs. On one level it is kind of silly - we already knew that infinite monkeys could write Shakespeare's works. The generator was always theoretically possible; the hard part is the verifier. But still - the search space in my system is much smaller than the search space of all possible letter sequences - so at least I can show that the system is a little more practical.
You can modify it to reinvent any other new idea - you just need to provide it the inspirations and evals for checking the generated essays.
I am thinking about next steps - maybe I could make it a bit more universal - but it seems that building something that works as needed would require scale.
I kind of like the software framework I vibe coded for this. It lets you easily build uniform samples where you can legitimately do all kinds of comparisons. But I am not so sure about using Dagster as the base for the system.
An app to track my work automatically by feeding screenshots to LLMs and analyzing those. https://donethat.ai
Obviously this is quite sensitive data, so I architected it to never store raw data, allow bring-your-own-key, and be fully private by default even in team settings - everybody keeps control of all their results.
Started about six months ago, have some first users, and always looking for feedback!
Thanks for checking it out. I'm using a 3rd party API for all the station info (radio-browser.info) and sometimes the station streams result in that error. You may have just had some bad luck clicking on a few in a row that don't work. I'll try to think of a way to filter those broken ones out.
For the past month I've been building a declarative web assembler of HTML/JSON using AI in multiple languages: https://github.com/Srid68/Arshu.Assembler, deployed to fly.io
The purpose is to find out whether I can build declarative software in multiple languages (Rust, Go, Node.js, PHP and JavaScript) while knowing only one language (C#) and without understanding the implementations deeply.
Another purpose is to validate AI models and their efficiency: development using AI is hard but highly productive, and having declarative rules to recreate the implementation can be used to validate models.
I am now convinced it is possible to build, and I'm working on creating a solid foundation with tests of the two assembler engines, structure dumps, and logging outputs that the AI can use to fix issues iteratively.
Need to add more declarative rules and implement a full-stack web assembler to see if AI will hit the technical debt which slows/stops progress. Only time will tell.
- Working on Kanji Palace (https://kanjipalace.com): We're going to publish the iOS app on the App Store and add vocabulary. Currently, the app converts single Kanji (e.g., 生) into vivid mnemonic images. We aim to support vocabulary like 先生.
- Writing a book about Claude Code, not just for assisted programming, but as a general AI agent framework.
It's a sync infra product that is meant to cut down 6 months of development time, and years of maintenance of deep CRM sync for B2B SaaS.
Every Salesforce instance is a unique snowflake. I am moving that customization into configuration and building a resilient infrastructure for bi-directional sync.
Building a new version of my distraction free writing app, poe-writer. https://getpoe.com
The new version is a rebuild in React with a cleaner interface, localisation, and a bunch of new features, and it lays the groundwork for full HTML docs instead of only Markdown.
I'm a filmmaker, so I built myself the filmmaking community tool that I wanted to use. I'm headed to FilmQuest in Provo this month to premiere my short, and that's gonna be a big test of my application.
I'm working on Plaid / Perplexity for business data.
The basic idea is that integrating business data into a B2B app or AI agent process is a pain. On one side there's web data providers (Clearbit, Apollo, ZoomInfo) then on the other, 150 year old legacy providers based on government data (D&B, Factset, Moody's, etc). You'd be surprised to learn how much manual work is still happening - teams of people just manually researching business entities all day.
At a high level, we're building out a series of composable deep research APIs. It's built on a business graph powered by integrations to global government registrars and a realtime web search index. Our government data index is 265M records so far.
We're still pretty early and working with enterprise design partners for finance and compliance use cases. Open to any thoughts or feedback.
Currently working on an advanced analytics tool for 0DTE trade data. It lets you scan through massive amounts of data in very little time, find seasonal patterns created by institutional investors, blast credit chains for profitable trades, and simulate/backtest arbitrary trades in combination to create portfolios. So far the software has yielded several successful trading strategies, which have outperformed standard approaches ever since the new administration in the US came to power. Currently in closed beta, but planning to release around Xmas.
I've been working on https://booplet.com. It's like Lovable but for desktop apps and heavily inspired by Robin Sloan's home-cooked app essay [1][2]. The idea is to let anyone, especially non-technical folks, build and use personal apps. Instead of cloud deployment, we focused on a local-first setup so that users can fully own their apps and data.
Currently working on expanding the Pacific Northwest’s largest durable carbon removal project using Enhanced Rock Weathering and starting a $1m SAFE fundraising round.
We received data last week verifying we are effectively mineralizing CO2 at a high rate while saving our farmer $135/acre annually in liming costs.
We’ve come this far on grants. Now it’s time to fundraise so we can bankroll our PhDs whilst we secure pre-purchase offtake deals.
If you know of any impact investors or are an offtake buyer at a large company, please email me at zach@goal300.earth
A lot of people often ask questions like:
- How do I lose body fat and build muscle?
- How can I track progress over time?
- How much exercise do I actually need?
- What should my calorie and macro targets be?
I have been working on a new Python HTTP client which is 100% Rust-based (sync+async). Using reqwest under the hood and providing everything it has to offer to Python land + much more. Also including mocking capabilities. Here: https://github.com/MarkusSintonen/pyreqwest
Started from the poor state of many Python HTTP clients and the poor testing utilities there are for them (e.g. the neglected state of httpx and all its perf issues).
Porting a game my friends made to the Steam Deck. The game is Python-based with a bespoke OpenGL game engine. Got a native Linux build out a week ago. Currently working on controller support.
Working on a multisig solution for authenticated file distribution, initially targeting GitHub releases. Based on minisig and git.
I think this project is an interesting addition as a software supply chain solution, but generating interest in the project in this early stage proves difficult.
I'm working on two graph optimization libraries for quantum computers. The first one was released a few months ago and the next version will make it much more powerful. The second one is currently being tested. Both of them work on actual quantum computers, which makes them exciting :)
In parallel, I'm trying to figure out how to train an LLM for SAST.
I'm building a pen plotter machine that is purpose built for multi-color artwork.
So far I have a Duet mainboard wired up to motors and a commercial gantry set (OpenBuilds). I've figured out how to wire up a servo control board to a GPIO pin, and the gcode necessary to run the servo up and down.
I'm designing and 3d printing parts for the pen gantry, I have a nice rail / slider setup using linear bearings. I'm almost done working out how the pen holder fits into my gantry setup but I'm struggling a little bit getting this past the finish line.
I already figured out how to generate custom GCODE that takes into account the needs of having no z axis. I need to make a simple web interface that lets me interact with the duet over USB, and this will be running off a raspi. This will allow me more GPIO and flexibility vs just wiring buttons straight to the duet.
I already have some code and logic to generate trace data from bitmap images, I just need to figure out a way to automate it so that the output still looks nice.
The goal is to create technology that is indistinguishable from magic. People without the technical understanding of what's going on will just see it as tech junk, but my hope is that by breaking down all the individual parts it will allow people to learn about CNC machines, vector vs raster and what it means for something to actually be a robot.
I still have zero idea how to make money with this. Career is struggling really badly but I am hopeful that what I am working on will allow me to display competency and skill to an employer. That's the fantasy at least.
For context, I'm a UX Designer at a low-code company. LLMs are great at cranking out prototypes using well-known React component libraries. But lesser known low-code syntax takes more work. We made an MCP server that helps a lot, but what I'm working on now is a set of steering docs to generate components and prototypes that are "backwards compatible" with our bespoke front end language. This way our vibe prototyping has our default look out of the box and translates more directly to production code. https://github.com/pglevy/sail-zero
Same as last time, I guess. It's a voxel building environment that uses irregular voxels (voxels with sloped faces). I've been working on fixing the bugs for a while now (and there are still a lot of them left).
I'm slowly working on the public release of Submerge VCS, and on an SQLite driver for my factory simulation game (in Odin) with an online trading system (in Erlang).
There is this popular coffee cup cooling problem: assume you want to keep your morning cup of coffee hot for as long as possible. When do you add your milk? Immediately or later?
I am overengineering a simulation-based solution to this because I think there are scenarios based on cup shapes and environmental temperatures that allow either answer to be true. This will end up as a blog post I guess.
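A back-of-the-envelope version of the core question, assuming Newton's law of cooling and instant mixing (the real simulation presumably varies cup geometry and ambient conditions):

```python
import math

def cool(temp, ambient, k, minutes):
    # Newton's law of cooling: dT/dt = -k * (T - ambient)
    return ambient + (temp - ambient) * math.exp(-k * minutes)

def final_temp(milk_at, coffee=90.0, milk=5.0, ambient=20.0,
               k=0.05, total=10.0, milk_frac=0.2):
    temp = cool(coffee, ambient, k, milk_at)             # cool until milk is added
    temp = (1 - milk_frac) * temp + milk_frac * milk     # instant, perfect mixing
    return cool(temp, ambient, k, total - milk_at)       # cool the rest of the way

print(final_temp(milk_at=0))   # milk immediately
print(final_temp(milk_at=10))  # milk at the end
```

In this bare-bones model, adding the milk immediately wins (hotter coffee sheds heat faster), which is exactly the kind of answer that cup shape and ambient temperature might flip.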
Besides a master's degree and an internship at a nutrition AI app startup, I'm taking another pass at procedural dungeon generation for my world-building website.
Now that I can finally test on hardware, I completely rewrote input handling. I can now support original NES controllers, but also SNES and the Power Pad dance mat, for anyone crazy enough to try that. The hardest part was working around a particularly nasty hardware bug: if you try to read the input ports on even cycles while one of the sound channels is playing, the data becomes corrupted. Perform the exact same read on an odd cycle and it works every time.
The solution? Have the cartridge keep track of CPU parity (there's no simple way to do this with just the CPU), then check that, skip one cycle if needed... and very carefully cycle time the rest of the routine, making sure that your reads land on safe cycles, and your writes land in places that won't throw off the alignment.
But it works! It's quite reliable on every console revision I've thrown it at so far. Suuuper happy with that.
An LLM-powered OSINT helper app that lets you build an interactive research graph. People, organizations, websites, locations, and other entities are captured as nodes, and evidence is represented as relationships between them.
Usually assistants like ChatGPT Deep Research or Perplexity are focused on fully automatic question answering, and this app lets you guide the search process interactively, while retaining knowledge in the graph.
The plan is to integrate it with multiple OSINT-related APIs, scrapers, etc.
It would be helpful for the box for each test to include some explanation, eg:
25-Hydroxyvitamin D, also known as calcidiol, regulates calcium absorption in the intestines, promotes bone formation and mineralization, and supports immune function.
Apolipoprotein B (ApoB) is a protein that binds to LDL receptors on cells, allowing lipoproteins to deliver cholesterol and triglycerides to tissues for energy or storage.
Lipoprotein(a) is a low-density lipoprotein variant identified as a risk factor for atherosclerosis and related diseases, such as coronary heart disease and stroke.
Would love to see the Red Cross partner with someone like you here in Australia. Not affiliated, just a donor. We're not financially incentivised like other countries but there's a big culture here about celebrating the free milkshake and/or sausage roll you get after donating.
I am working on www.accrux.co. It's basically a project that lets investors bring their diversified portfolios together and manage multiple portfolios in one place. As a small investor myself with some crypto, stocks, and fixed assets, I found it difficult to bring my investments together, which is why I decided to build this. The goal is to bring clarity to your investments and act as your investment companion, giving you detailed insights and periodic alerts on the health of your portfolios. I currently have it in testing; if anyone is willing to give it a try, sign up at https://appstaging.accrux.co/signup or reply so I can take you through a demo. It is completely free for the first few months and may charge a couple of dollars later, once more features are added, to cover the cost of the service.
I'm trying to turn code into a design tool. Kind of like if you ask yourself - what if Cursor had been built for designers?
Currently it looks like this:
- code editor directly in the browser
- writes to your local file system
- UI-specific features built into the editor
- ways to edit the CSS visually as well as using code
- integrated AI chat
But I have tons of features I want to add. Asset management, image generation, collaborative editing, etc.
It's still a prototype, but I'm actively posting about it on twitter as I go. Soon, I'll probably start publishing versioned builds for people to play with: https://x.com/danielvaughn
The goal is to catch vulnerabilities early in the SDLC by running an agentic loop that autonomously hunts for security issues in codebases. It's currently available as a CLI tool and VS Code extension. I've been actively using it to scan WordPress and Odoo plugins and have found several privilege escalation vulns, documented in a blog post here: https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
I'm in The Hague right now at a digital democracy conference, where I was invited to present on my prototype that I've been building the past few months!
It's for doing realtime "human cartography", to make maps of who we are together in complex large-scale discourse (even messy protest).
It's for exploring human perspective data -- agree, disagree, pass reactions to dozens or hundreds of belief statements -- so we can read it as if it were Google Maps.
My operating assumption is that if a critical mass of us can understand culture and value clashes as mere shapes of discourse, and we can all see it together, then we can navigate them more dispassionately and with clear heads. Kinda like reading a map or watching the weather report -- islands that rise from oceans, or plate tectonics that move like currents over months and terraform the human landscape -- maybe if we can see these things together, we'll act less out of fear of fun-house caricatures. (E.g., "Hey, dad, it seems like the peninsula you're on is becoming a land bridge toward the alt-right corner. I feel a little bummed about that. How do you feel about it?")
(It builds on data and the mathematical primitives of a great tool called Pol.is, which I've worked with for almost a decade.)
I’m refactoring https://harcstack.org so that it can handle Theme plugins. Next after Pico CSS is Bulma. The idea is to complement HTMX on the server side with functional HTML coding (inspired by elmlang), components and a base library.
It is a DNS service for AWS EC2 that keeps up with ever-changing IPs when you cannot use an Elastic IP (e.g. with an ASG), or when you don't want to install any third-party clients on your instances.
It fetches the IPs regularly via the AWS API and assigns them to fixed subdomains.
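A rough sketch of that sync loop with boto3 (the zone ID, domain, and record-naming scheme are placeholders; the real service presumably also handles pagination, tag filtering, and record cleanup):

```python
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

def sync(zone_id="Z123EXAMPLE", domain="ec2.example.com"):
    # find all running instances and upsert an A record per instance
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            ip = inst.get("PublicIpAddress")
            if not ip:
                continue
            route53.change_resource_record_sets(
                HostedZoneId=zone_id,
                ChangeBatch={"Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": f'{inst["InstanceId"]}.{domain}',
                        "Type": "A",
                        "TTL": 60,  # short TTL so IP changes propagate quickly
                        "ResourceRecords": [{"Value": ip}],
                    },
                }]},
            )
```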
Conductor is an LLM-agnostic framework for building sophisticated AI applications using a subagent architecture. It provides a robust and flexible platform for orchestrating multiple specialized AI agents to accomplish complex tasks, with features like LLM-based planning, memory persistence, and dynamic tool use.
The project is inspired by the concepts outlined in "The Rise of Subagents" by Phil Schmid at https://www.philschmid.de/the-rise-of-subagents and aims to provide a practical implementation of this architectural pattern.
The agents are always on. They log into real sites, click around, fill out forms, and adapt when pages change — no brittle scripts, no APIs needed.
You can deploy one in minutes, host it yourself, and watch it do work like a human (but faster, cheaper, never tired).
Kind of like a “browser-use cloud,” except it’s yours — open, self-hostable, and way more capable.
I’ve been casually getting into thrifting and realized pretty quickly that Lens is super limited in its functionality and is mostly a shopping app. I put a site together that is like a supercharged version of Lens for thrifters where you can get info on price, demand, and condition. Share function is borked atm tho
Still working on cataloging a curated list of craft beer venues across the world at https://wheretodrink.beer
Unsure what the plan is going forward with it, apart from adding more venues and more countries. As long as it's fun for me I'll just keep adding things.
Just added health inspection data from countries that have it in open datasets (UK and Denmark). If anyone knows of others, I'd be appreciative of hints.
Thinking of focusing on another idea for the rest of the year; I have a rough idea for a map-based UI to structure history by geofences or lat/lng points for small local museums.
I’m working on Leggen (https://github.com/elisiariocouto/leggen), a self hosted personal banking account management system. It started out as a CLI that syncs your bank account transactions and balances, saves them in a sqlite database and can alert you via Telegram or Discord if a transaction matches a filter.
Recently I started refactoring the project with the help of Claude Code and Copilot Agent to include an API and a Web app to explore the data and configure it.
The product is using the GoCardless Bank Account Data APIs to connect to banks via PSD2, but I found out recently that registering a new account is no longer possible, so I'm currently looking into alternatives.
Check out Lunch Flow (https://lunchflow.app) for a global open banking API that's accessible for personal finance apps :) We integrate with Gocardless, among other global open banking providers.
I’ve been building proficiency with quantum optics equipment. Repeating classic quantum entanglement experiments like the quantum eraser [0] and violating the CHSH inequality (which won the 2022 Nobel). I’m working towards a novel quantum eraser variant.
I'm doing some experiments in LLM (historical) fiction writing. I feel like we can get pretty good writing out of an LLM (especially Sonnet) with enough prompting, reasoning, and guided thinking. Still with a human as producer and guidance.
I'm trying to use this to create stories that would be somewhat unreasonable to write otherwise. Branching stories (i.e., CYOA), multiperspective stories, some multimedia. I'm still trying to figure out the narrative structures that might work well.
LLMs can overproduce and write in different directions than is reasonable for a regular author. Though even then I'm finding branching hard to handle.
The big challenges are rhythm, pacing, following an arc. Those have been hard for LLMs all along.
The goal was to make the learning material very malleable, so all content can be viewed through different "lenses" (e.g. made simpler, more thorough, from first principles, etc.). A bit like Wikipedia it also allows for infinite depth/rabbit holing. Each document links to other documents, which link to other documents (...).
I'm also currently in the middle of adding interactive visualizations which actually work better than expected! Some demos:
Currently working on the web reader of WithAudio. Just add with.audio/ to the beginning of a public URL and get the text and audio in your browser. It runs the TTS in your browser, so it's free and unlimited.
I built this to get some traffic to my main project's website using a free tool people might like. The main project: https://desktop.with.audio -> a one-time-payment text-to-speech app with text highlighting, MP3 export, and other features, for macOS (ARM only) and Windows.
Porting my binary & decimal palindromes[0] finding code[1] to CUDA, with which I had no experience before starting this project.
It's already working, and slightly faster than the CPU version, but that's far from an acceptable result. The occupancy (which is a term I first learned this week) is currently at a disappointing 50%, so there's a clear target for optimisation.
Once I'm satisfied with how the code runs on my modest GPU at home, the plan is to use some online GPU renting service to make it go brrrrrrrrrr and see how many new elements I can find in the series.
I'm working on a web app that creates easy-to-understand stories and explainers for the sake of language learning. You can listen in your favourite podcast app, or directly on the website with illustrations.
I'm eager to add more languages if anyone is fluent/able to help me evaluate the text-to-speech.
Working on https://practicecallai.com/ - simple saas that lets users run practice calls / role play against a custom AI partner. Goal is to make it the easiest to use & fastest to get started with in the market.
It’s been a fun, practical way to continuously evaluate the latest models two ways - via coding assistance & swapping between models to power the conversational AI voice partner. I’ve been trying to add one big new feature each time the model generation updates.
The next thing I want to add is a self improving feedback loop where it uses user ratings of the calls & evaluations to refine the prompts that generate them.
I'm trying to make manual focus work on my Lenovo tablet. Everything seems OK - onCaptureStarted shows the focal distance set to 5 diopters - yet the camera takes the photo at infinity.
For the past 2 years we have been trying to bring some order to the chaos called restaurant menu creation. Correctify is a platform combining all the features restaurants need for both online and print menus, with most tasks being automated with AI.
https://correctify.com.cy/
Plugging away at reviews of generative AI tech with detailed comparisons. I announced the launch on HN a while ago; thought I'd use this month's thread for a status update.
I just took Qwen-Image and Google’s image AIs for a spin and I keep a side by side comparison of many of them.
Thanks, the 3D asset creators are very interesting. I'm working on an LLM -> CAD tool (for 3D printing), and your post confirms that I should keep my focus, because there are so many other things to do (UV unwrapping!) if you are targeting games, for example.
I've been working on a browser plugin for Amazon that attempts to identify the brand and seller country: https://www.wheresthatfrom.org/
It's mostly where I want it to be now, but still need to automate the ingest of USPTO data. I'd really like it to show a country flag on the search results page next to each item, but inferring the brand name just from the item title would probably need some kind of natural language processing; if there's even a brand in the title.
No support for their mobile layout. Do many people buy from their phone?
It’s fast, free, keyboard-only, cross-platform, and ad-free. It’s been my only source of music for the past 6 months or so.
I’m not sharing the link because of music copyright issues. But I think more people should do that, to break free of the yoke of greedy music platforms.
- I create channels that play tracks in a defined order, on repeat, but with a duration of at least 80 hours (and ever-growing). Old-school album-based listening.
- I think discovering new stuff is twisted in the current environment. "New stuff" in the sense of radio/Spotify is mostly "same stuff as I know and like, but slightly different so it feels new". You don’t discover truly new stuff unless you actively search for it. No radio or service is going to passively do that for you.
https://ivyreader.com
I am working on my RSS reader/podcast player. I am currently finding and patching all the little bugs, fixing the UI, and creating the landing page.
I'm still working on the Mint programming language (https://mint-lang.com/) and DevBox (https://www.dev-box.app/) which is a desktop application/browser extension/web application with a bunch of small tools.
I have been following Mint for a while and it feels like a project that truly focuses on making developers happy. The mix of Elm-style structure, strong typing, and built-in tools like testing and formatting makes it really enjoyable to use. Keep up the great work!
I'm making an app for self-tracking. Combining elements from habit trackers, health logging and journaling. Built for rich customization and local-first.
Want to be free of the rigid structures of many existing apps while providing better UX/usability than using a spreadsheet.
I was tired of only having 1 or 2 things per newsletter that interested me, multiplied by however many newsletters I've subscribed to. Trying to solve that.
The idea: design newsletter sections on whatever topics you want (football scores, tech news, new restaurants in your area, etc.), choose your tone and length preferences, then get a fully cited digest delivered weekly to your inbox. Completely automated after initial setup (but you can refine it anytime).
Have the architecture sorted and a pretty good dev plan, but collecting interest before I invest a ton of time into it.
I just finished writing a small script that finds all optimally bad Wordle guesses. More precisely, on hard Wordle, where you must give a valid word (from the guesses list), and you must use yellows + greens, and must not use greys, what are all the combinations of answer + 6 guesses where there is only grey. This is equivalent to finding all answer + 6 guesses where no letters are in common between any pair.
This is basically a variation on bit-packing (which is NP-hard), but it's tractable if you prune the search space enough.
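The disjointness test compresses nicely into bitmasks — a 26-bit letter set per word — and pairwise disjointness is equivalent to being disjoint from the running union, so one accumulated mask prunes whole subtrees. A rough sketch of that idea (the answers/guesses split and the word lists themselves are elided here):

```python
def letter_mask(word):
    m = 0
    for c in word:
        m |= 1 << (ord(c) - ord("a"))
    return m

def disjoint_sets(words, size=7):
    # dedupe words that share a letter set, then do a pruned DFS over masks
    by_mask = {}
    for w in words:
        by_mask.setdefault(letter_mask(w), []).append(w)
    masks = sorted(by_mask)

    def extend(used, chain, start):
        if len(chain) == size:
            yield [by_mask[m] for m in chain]
            return
        for i in range(start, len(masks)):
            m = masks[i]
            if used & m == 0:  # shares no letters with anything chosen so far
                yield from extend(used | m, chain + [m], i + 1)

    yield from extend(0, [], 0)
```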
This is going in fits and starts, but I'm working on a Win16 decompiler. The problems with existing decompiler tools for 16-bit code are a) support for the NE file format is far less widespread; b) 16-bit code means getting to deal with segment registers, which are largely unmodelled in most binary tools; and c) it turns out that you also have to get really good at recognizing "this is a 32-bit value being accessed entirely in 16-bit word chunks," which tends to be under-supported in most optimization toolchains.
I'm building Monadic DNA explorer, a tool to explore thousands of genetic traits from GWAS Catalog in your browser and plug in your own DNA data from 23andMe, Ancestry, etc. All processing happens locally on your machine and AI insights are run in a private LLM inside a TEE.
An open source campaign management app for TTRPGs. There are a ton out there that are basically just fancy wikis. I'm working on one in Django for running my old-school D&D game I'm starting back up this fall.
You provide your URL and an LLM browses your site and writes up feedback. Currently working on increasing the quality of the feedback. Trying to start with a narrower set of tests that give what I think is good feedback, then increase from there.
If a tool like this analyzed your website, what would you actually want it to tell you? What feedback would be most useful?
Trying to get a new release of Video Hub App - my 7+ years passion project to browse videos from local storage in style. Maybe will finally finish the (optional!) facial recognition feature I started 5+ years ago.
lpviz is like Desmos, but for linear programming - I've implemented a few LP solvers in Typescript and hooked them up to a canvas so you can draw a feasible region, set an objective direction, and see how the algorithms work. And it all runs locally, in the browser!
If you go to https://lpviz.net/?demo it should show you a short tour of the features/how to use it.
It's by no means complete but I figured there may be some fellow optimization enthusiasts here who might be interested to take a look :) Super open to feedback, feature requests, comments!
I'm working on adding favicon support to listings on my website directory, which I recently launched: https://intrasti.com
I just released the changelog 5 minutes ago (https://intrasti.com/changelog). I went with a directory-based approach using the international date format YYYY-MM-DD, so in the source code it's ./changelog/docs/YYYY/MM/DD.md - seems to do the trick and is ready for pagination, which I haven't implemented yet.
It is a modified version of Shopify CEO Tobi's `try` implementation[0]. It extends his implementation with sandboxing capabilities and is designed with functional core, imperative shell in mind.
I had success using it to manage multiple coding agents at once.
Eidetica is a decentralized database project I've been working on that is finally in a somewhat usable state. It basically wraps CRDTs in a close-to-normal database interface, with decentralized authentication, background sync, etc.
It’s a simple NPM package that produces colorful avatars from input data to aid with quick visual verification. I’d like to see it adopted as a standard.
Intercom is awful. There is a huge market here, and chatwoot haven’t done a great job.
Our company would love a well designed chat button linked to Slack, combined with a helpdesk that supports email queries and also allows people to raise issues via the web.
That’s it, that’s all we need. Happy to pay.
It’s hard to express how badly intercom is designed and engineered. It’s also very expensive and constantly upsold, despite being rubbish. If no one fixes this it will be my next startup.
Too many companies have gone down the road of “AI support”, without understanding that AI must rest on the foundation of great infrastructure. Intercom are pushing their AI so hard it’s absolutely infuriating.
Currently working on Note Cargo, basically a self-hosted Markdown note-taking app, but I tried not to use a database. So it's similar to Obsidian/Logseq, but web-based.
And I'm currently working to make things shareable, also without using a database.
It says OneNote, but does it also support stylus handwriting, links, tables, and images? I'd love a OneNote replacement, but these features are what make me stick with OneNote.
Working on: a NYT Letter Boxed solver, to teach myself Rust, with some ambitions to turn it into a game by itself. I think this game could be made a lot more fun.
Thinking about: A new take on LinkedIn/web-of-trust, bootstrapped by in-person interactions with devices. It seems that the problem of proving who is actually human and getting a sense of how your community values you might be getting more important, and now devices have some new tools to bring that within reach.
I’m working on Reflect [0], a private self-discovery and self-experimentation app. You can track metrics, set goals, get alerted to anomalies, view correlations, visualize your data, etc.
Working on the finishing touches for my first bigger game, created and published entirely by myself! It's a pixelly courier-adventure, questioning the pace of our modern world a little bit: https://store.steampowered.com/app/3644970/Fading_Serenades/
Done with Godot in just 7-8 months, it's fun how fast you can create things when you really focus on something :)
My partner and I are working on Supabird.io (https://supabird.io), a tool to help people grow on X in a more consistent and structured way. It analyzes viral posts within specific communities so users can learn what works and apply those insights to their own content.
My partner shares our journey on X (@hustle_fred), while I’ve been focused on building the product (yep, the techie here :). We’re excited to have onboarded 43 users in our first month, and we're looking forward to getting feedback from the HN community!
I've been working on https://edugram.live to make learning fun like a social media feed. Planning to build an Insta-like mobile app feed on top of this.
I'm working on that thing the world really needs - yet another JavaScript framework. The code is all in the repo for my app, though. Hopefully I can break it out into its own repo to share by the end of the year.
I'm working on Debtmap - An open source Rust-based code complexity analyzer that tells you exactly which code to refactor and which code to test for maximum impact. Combines complexity metrics with test coverage data to identify the riskiest code in your codebase. Uses entropy analysis to reduce false positives by distinguishing genuinely complex code from repetitive patterns.
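The combination could be as small as this toy scoring rule (my illustration of the idea, not Debtmap's actual formula):

```python
def risk_score(complexity: float, coverage: float, entropy: float) -> float:
    """Rank code for refactor/test priority.

    coverage in [0, 1]; entropy in (0, 1], where low entropy means
    repetitive code whose complexity gets discounted as a likely
    false positive.
    """
    return complexity * (1.0 - coverage) * entropy
```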
I've got a couple of AI scripts on the go, and I want to see if I can get the inference to run on my phone.
1. is something that can poll a bunch of websites' workshop/events pages to see if there are any new events [my mother] wants to go to and send a digest to her email
2. is a poller to look up the different Safeway/Co-op/Save-On flyers and so on to see what's on sale between the different places, then send a mail with some recipes it found based on those ingredients
I'm most of the way through 1, but haven't started on 2 yet.
Bash scripts that make Terraform configurations for scaling bioinfo work to n H100/A100 spot VMs, with resistance to the VMs getting terminated. I now have it for AlphaFold3 jobs, but I need to make the same for Boltz, Gnina, GROMACS (although spot VMs do not make sense there), etc.
The next step is to make a simple login portal so non-trusted persons can submit work (as this is a uni project) and get the results/process mailed to them.
I'm currently working on building a local delivery service using electric cargo bikes in NYC: https://hudsonshipping.co. We are planning to launch our first pilot in early 2026 with our first customers in Brooklyn.
We've built all of the tech in-house to manage the fleet, deliveries and optimize our routes. If you know of anyone that would like to be a part of the pilot program, feel free to reach out to me!
We just brought an IFR 2947a communications service monitor back from the dead. It's amazing how much functionality you can pack into about 6U of rack space. I was testing it out and detecting signals down to 0.1 µV on the spectrum analyzer.
I've been gathering up the supplies to set up a proper radio/computer repair workshop.
I am playing at creating an FTP interface for all file transfer protocols (including the Dropbox API) so we can settle the argument of the infamous top comment of the Dropbox launch: https://github.com/mickael-kerjean/filestash
Shipping pets and animals across borders is a big problem, and we are building the operating system to solve it at scale. If you are a vet (or work in the veterinary space), we would love to talk to you.
I'm expanding my computational biology toolkit in rust. Of recent interest is optimizing long-range molecular dynamics forces on GPU and SIMD, adding support to generate lipid membranes and LNPs, and a 3D small molecule editor with integrated dynamics.
I needed to let off steam regarding Chat Control, so I created a little site where people can post and comment while sitting on the toilet - since we take our smartphones everywhere with us, right? It surely is not influential, but it gave me a good time and a better feeling. https://shitcontrol.eu/
Haunted house trope, but it's a chatbot. Not done yet, but it's going well. The only real blocker is that I ran into the parental controls on the commercial models right away when trying to make gory images, so I had to spin up my own generators. (Compositing by hand is definitely taking forever.)
Working away at https://TempMailDetector.com, a privacy focused disposable email detection API which only requires the domain part and not the user part of the email. The service is able to determine if a domain is likely a disposable email, a forwarding service, and actively crawls for new domains.
Duckyscript is a language for the USB Rubber Ducky, which costs approximately $100. A USB Rubber Ducky is a USB key that gets recognized as a keyboard and starts typing text and shortcuts automatically once you plug it into anything. To tell the key what to type, you use Duckyscript.
I'm using circuitpython. The last thing I did was to de-recursify the interpreter with a stack.
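The de-recursifying idea, roughly: replace nested interpreter calls with frames on an explicit list, which helps on CircuitPython where the Python call stack is shallow. A toy illustration (the node shapes and helpers are invented, not the actual interpreter):

```python
import time

def type_text(text):          # stand-in for sending HID keystrokes
    print("TYPE", text)

def run(program):
    stack = [[program, 0]]    # each frame: [list of statements, next index]
    while stack:
        stmts, i = stack[-1]
        if i >= len(stmts):
            stack.pop()       # frame finished, resume the caller
            continue
        stack[-1][1] = i + 1  # advance before descending
        kind, arg = stmts[i]
        if kind == "STRING":
            type_text(arg)
        elif kind == "DELAY":
            time.sleep(arg / 1000)
        elif kind == "BLOCK":
            stack.append([arg, 0])  # descend without Python recursion

run([("STRING", "hello"), ("BLOCK", [("DELAY", 100), ("STRING", "world")])])
```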
The more of Duckyscript I implement, the more I think I should create my own language. Duckyscript sucks as a language...
I've spent the last few months working on a custom RL model for coding tasks. The biggest headache has been the lack of good tooling for tuning the autorater's prompt. (That's the judge that gives the training feedback.) The process is like any other quality-focused task—running batch rating jobs and doing SxS evaluations—but the tooling really falls short. I think I'll have to build my own tools once I wrap up the current project.
Building an open-source workflow engine for automating repetitive dev-ops and data-ops tasks. Focused on portability and running on bare metal or small VPS.
I want to work out at least the minimum amount but always end up procrastinating ... for some fortunate ones (me), it only takes like 20 min a day to keep in good shape, with stuff you can do at home. We all know this, but for many, somehow it never happens.
I want to keep a tally of the push ups I do every day (and squats, etc...). I decided to gamify it, but not in a crappy way. I would like to see my streaks (kind of like how Github shows commits) and how other friends are doing.
Right now it's prototype v0.0.0.0.0.1, as you can see: no UI, and the push-up detector actually kind of detects squats, lol, but I'm working on it. Btw, the push-up detector is client-side only, so rest assured I never get to see your video.
There's a global push-up count, an aggregate of all push-ups everyone does on the site. Right now it's linked to a button, so it's more like a clicker - feel free to exercise your fingers. I figured it would be super nice if one day we could do like a million push-ups collaboratively. And just watching it go up in real time, meaning somebody else is working out, should get me inspired to do some as well.
Please leave your feedback and yeah you can join the Push Up Club anytime :D.
I built a website that lets you browse Pokemon ENS (Ethereum Name Service) names and view their registration statuses and recent sales. It's a small but engaged niche.
Building a better way to make design comparisons for electronics engineers. Starting with ceramic capacitors for now, but expanding to other component types soon. www.get-merlin.com
Working on https://videotobe.com, an audio/video transcription service.
VideoToBe started as a user friendly Whisper wrapper — but is evolving into a full pipeline that extracts, summarizes, and structures insights from multimedia content.
An implementation of statecharts. I'm working through core functionality using recursive algorithms.
I discovered that the "least common ancestor" boils down to the intersection of the two 'root-paths', where you select the last item of the intersection as the 'first/least common ancestor'.
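A small sketch of that, assuming each state object carries a `parent` pointer (names hypothetical):

```python
def root_path(state):
    # path from the root down to `state`, e.g. [root, A, A1]
    path = []
    while state is not None:
        path.append(state)
        state = state.parent
    path.reverse()
    return path

def least_common_ancestor(a, b):
    lca = None
    for x, y in zip(root_path(a), root_path(b)):
        if x is y:
            lca = x  # the last item shared by both root-paths
        else:
            break
    return lca
```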
A React Three Fiber-based avatar engine with speech synthesis, voice automation, and real-time interaction with tools such as Home Assistant, n8n, and LLMs.
Building a new layer of hyper-personalization over the web.
Instead of generating more content, it helps you reformat and interact with what already exists, turning any page, paper, or YouTube video into a summary, mind-map, podcast, infographic or chat.
The broader idea is to make the web adaptive to how each person thinks and learns.
Working on my lisp. I recently added delimited continuations, even wrote a blog post about it. Now I'm working on adding more control primitives. Just finished researching generators. I'm going to implement them as a separate interpreter of sorts.
The main idea is to bring as many of the agentic tools and features as possible into a single cohesive platform, so that we can unlock more useful AI use-cases.
On-site surveys for eCommerce and SaaS. It's been an amazing ride leveling up back and forth between product, design, and marketing. Marketing is way more involved than most people on this site realize...
- Many say they want to stop doomscrolling and clout-chasing, but I don't know how many are actually willing to do so
- Individuals may move here, but their friends won't. So the feed will be initially empty by design. Introducing any kind of reward is against our ethos, so we are clueless about how to convince existing friend circles to move.
This may work in your favour - it's one of the reasons I enjoy Mastodon so much - friction is/was a little higher which kept my network small but focussed
Headbang, a rhythm game you play by bobbing your head while wearing AirPods and listening to music, is what I'm considering building next. The idea came from someone else using AirPods to create a racing game (RidePods).
Headbang concept sounds really fun, I'd love to play that as a fan of rhythm games, but wow my neck would hate me I think. I am no George Fisher in that regard
I am working on a paper about solving the Royal Game of Ur, one of the world's oldest board games. We solved it a while ago, and are now trying to get more formal about it (https://royalur.net/solved).
Hmm, a personal assistant of sorts that does evaluation of you to get to the bottom of who you really are. For very obvious reasons, it is a local only project and not exactly intended for consumption.
Beyond that, just regular random stuff that comes up here and there, but, for once, my hdd with sidelined projects is slowly being worked through.
An application that helps deaf and nonverbal individuals with daily interactions when they’re out and about.
My first career was in sales. And most of the time these interactions began with grabbing a sheet of paper and writing to one another. I think small LLMs can help here.
Currently making use of APIs, but I think small models on phones will be good enough soon. Just completed my MVP.
I'm working on a workout tracker that you can actually use for things like TRX and gymnastic rings. Along with normal workouts too. Let me know if there's anything you'd like on there. https://gravitygainsapp.com/
What do you guys think of this? https://www.textaurant.app. It's an AI "agentic" SMS ordering system that's hopefully better than Taco Bell's attempt... I got sick of navigating every restaurant's nuanced online ordering flow and figured I'd try to standardize it myself with an SMS-based assistant (yes, I'm aware of the XKCD). The idea is that every restaurant would have its own number, or down the road I could have a single number for all restaurants, but I'm somewhat token/context limited right now. It uses GPT-4o and I've been working on it for the past 4 months. Closed source for now, but who knows, I might open it up - I'm still deciding if it's worth trying to patent.
Writing a course for a customer on how to use Claude Code well, especially around brownfield development (working on existing code bases, not so much around vibe-coding something new).
Lovely interface. This is quite impressive. I can't seem to get a terminal running though. Can I actually execute scripts here? I opened code and created a hello.py, terminal did not come up in Code either.
This is a job board for AI jobs and companies. The job market in AI is pretty hot right now, and there are a lot of cool AI companies out there. I'm hoping to connect job seekers with fast-growing AI companies.
A mobile app that checks my email to find and extract family-related events/activities - the kind of thing that's buried in a 12-point bullet list in font size 8, inside one of the 10 school email messages received during the week.
It runs fully on-device, including email classification and event extraction
AppGoblin is a free place to do app research: understand which apps use which companies to monetize, track where data is sent, and see what kinds of ads are shown.
I'm building a mod for the game Subway Builder (http://subwaybuilder.com) that lets me undo/redo individual stations and tracks, instead of clearing all blueprints.
This is built with Rust, egui and SQLite3. The app has a downloader for NSE India reports - the daily end-of-day stock prices. Out of the box the app is really fast, which is expected but still surprises me. I am going to work on improving the stock charts. I also want to add an AI-assisted stock analyst. Since all the stock data is in the SQLite3 DB, I should be able to express my stock screening ideas as plain text and let an LLM generate the SQL and show the results in my data grid.
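That screening idea reduces to a small text-to-SQL loop; here is a sketch in Python for brevity (the schema and `ask_llm` are placeholders, and generated SQL should of course only be run read-only):

```python
import sqlite3

SCHEMA = "eod_prices(symbol TEXT, trade_date TEXT, close REAL, volume INTEGER)"

def ask_llm(prompt):
    # placeholder: wire up your LLM of choice; canned reply for illustration
    return "SELECT symbol, close FROM eod_prices ORDER BY close DESC LIMIT 10"

def screen(idea, db_path="nse_reports.db"):
    prompt = (
        f"SQLite table: {SCHEMA}\n"
        f"Write one SELECT query (no commentary) for: {idea}"
    )
    sql = ask_llm(prompt)
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()  # rows feed the data grid
```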
It was really interesting to generate it within 3 days. I had just a few places where I had to copy from app (std) log and paste into my prompt. Most of the time just describing the features was enough. Rust compiler did most of the heavy lifting. I have used a mix of Claude Code and OpenCode (with either GLM 4.5 or Grok Code Fast 1).
I have been generating full-stack web apps. I built and launched https://github.com/brainless/letsorder (https://letsorder.app/). Building full-stack web apps is basically building 2 apps (at a minimum), so desktop apps seem way better.
In the long term, I plan to build apps and help others generate theirs. I am building a vibe coding platform (https://github.com/brainless/nocodo). I have a couple of early-stage founders I consult for who take my guidance to generate their products (web and mobile apps + backend).
I should also point out - if you download the current version, you should immediately apply the update that will pop up. And even then, your results may be flaky.
While working on Shelvica, a personal library management service and reading tracker, I realized I needed a source of data for book information, and none of the solutions available provided all the data I needed. One might provide the series, the other might provide genres, and yet another might provide a cover with good dimensions, but none provided everything.
So I started working on Librario, an ISBN database that fetches information from several other services, such as Hardcover.app, Google Books, and ISBNDB, merges that information, and returns something more complete than using them alone. It also saves that information in the database for future lookups.
You can see an example response here[1]. Pricing information for books is missing right now because I need to finish the extractor for those, genres need some work[2], and having a 5-month-old baby makes development a tad slow, but the service is almost ready for a preview.
The algorithm to decide what to merge is the hardest part, in my opinion, and very basic right now. It's based on a priority and score system for now, where different extractors have different priorities, and different fields have different scores. Eventually, I wanna try doing something with machine learning instead.
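For illustration, the priority half of that idea can be reduced to a few lines (extractor names and weights invented here; the real system also scores per-field):

```python
EXTRACTOR_PRIORITY = {"hardcover": 3, "google_books": 2, "isbndb": 1}

def merge(responses):
    # responses: list of (extractor_name, {field: value}) pairs
    merged, winner = {}, {}
    for extractor, fields in responses:
        prio = EXTRACTOR_PRIORITY.get(extractor, 0)
        for field, value in fields.items():
            if value in (None, "", []):
                continue  # never let an empty value beat a filled one
            if field not in merged or prio > winner[field]:
                merged[field] = value
                winner[field] = prio
    return merged
```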
I'd also like to add book summaries to the data somehow, but I haven't figured out a way to do this legally yet. For books in the public domain I could feed the entire book to an LLM and ask them to write a spoiler-free summary of the book, but for other books, that'd land me in legal trouble.
Oh, and related books, and things of the sort. But I'd like to do that based on the information stored in the database itself instead of external sources, so it's something for the future.
Last time I posted about Shelvica, some people showed interest in Librario instead, so I decided to make it something I can sell rather than just a service I use in Shelvica[3], which is why I'm focusing more on it these past two weeks.
[2]: In the example you'll see genres such as "English" and "Fiction In English", which is mostly noise. Also things like "Humor", "Humorous", and "Humorous Fiction" for the same book.
[3]: Which is nice, cause that way there are two possible sources of income for the project.
Working on https://JobBoardSearch.com, a meta-directory of job boards helping job site owners with their DR, visibility, job cross-posting, and promotion in general.
Currently running some finetuning experiments on non-verbal sounds to teach TTS how to laugh. I have had some success adding the necessary tags and tokens to multiple systems, but assembling the necessary dataset with sufficient quality is hard.
Working on an AI governance and security platform that gives security and GRC teams visibility into what AI tools people are actually using, but also what is going into them.
It's a browser extension right now, and the platform integrates with SSO providers and AI APIs to help discover shadow AI, enforce policies, and create audit trails. Think observability for AI adoption, but also Grammarly, since we help coach end users to better behavior/outcomes.
Early days, but the problem is real - we have a few design partners in the F500 already.
Camera Search (camerasearch.ai) is my iOS app for tradespeople and DIY users. It combines voice, video, image understanding, and chat—backed by a tuned LLM API—to help diagnose issues and guide builds/repairs in real time.
A tool to help California homeowners lower their property taxes.
This works for people who bought in the past years' low-interest environment and are overpaying in taxes because of that.
Feel free to email me, if you have questions: phl.berner@gmail.com
I just tried your app, and after providing my email, the analysis I got was for a completely different address than what I entered. I tried twice just to make sure the address I entered was right.
- hiragana / katakana / number / time reading quizzes
- vocabulary quizzes based on wordlists you define and build
- learn and practice kanji anki-style (using FSRS algo)
- the coolest feature (imo) is a "reader": upload Japanese texts (light novels, children's books, etc.), then translate them to your native language to practice your reading comprehension. Select text anywhere on the page (with your cursor) to instantly do a dictionary lookup. An LLM evaluates your translation accuracy (0..100%) and suggests other possible interpretations.
I just revamped the UI look and feel the other day after implementing some other user feedback! I'm now exploring ads as a way to monetize it.
I'm working on ServBay, a local development environment I built to end the constant pain of juggling different versions of programming languages like Python, PHP, Node.js, Golang, Rust and so on, plus databases and local SSL certificates.
It's an all-in-one toolkit with one-click version switching, automatic HTTPS for local domains, and an integrated mail catcher.
I've just rolled out some major updates:
1. Local AI Deployment: You can now run models like Llama 3 & Code Llama directly within ServBay.
2. Built-in Tunneling: Share local sites with anyone on the internet, ngrok-style, or via frp or Cloudflare.
3. Windows is Live! The new Windows version is out and quickly reaching feature parity with our macOS app.
Next up is ServBay 2.0. I'm currently gathering feedback on features like deeper Docker integration and more flexible site configurations. I'd love to hear what the HN community thinks is important.
Working on dev tools for MCP servers. As a building block I recently published a library to help write tests for MCPs - https://facetlayer.github.io/expect-mcp/
Working on securing software against backdoors and hidden exploits using a set of debloating tools. First one available here: github.com/negativa-ai/BLAFS
Any chance you'll take a look at power tools next?
There are some Amish people who rebuild Dewalt, Milwaukee etc battery packs. I'd like a repairable/sustainable platform where I can actually check the health of the battery packs and replace worn out cells as needed.
To give you an idea of the market, original batteries are about $149, and their knockoffs are around $100.
Very nice, looking forward to a deal with Décath' ;) How hard is it to make it compatible with the various motors when there is communication involved?
I've been wondering for a while if the display on ebikes could also be a more open and durable part of it.
(you could fix your link so it's clickable)
1. thanks for building this. I will get back on my iron deficiency diet. I now understand it takes over 7 weeks to reliably fix
2. When doing data input, I'm lazy, especially for the blood age calc. My process is: upload the list + my blood results to an LLM and have it spit out the list of values I need (a terrible privacy job right here, I know). Anyway, I wonder if you could offer another route for data input - like a text field with the full list and empty values that I could copy to an LLM, ask it to populate with my results, and then paste back into the form.
Keep up the good work!
A little library to define functions in English (through an LLM, of course; for TypeScript initially) and use these functions like ordinary (async) functions (calling & being called). Agents as functions, and multi-step concurrent orchestration of agents with event loops, if fanciness is in order.
And an agentic news digest service which scrapes a few sources (like Hacker News) for technical news and creates a daily digest, which you can instruct and skew with words.
A fucking 16-bit robot controller in the Australian outback. Because the contractor who provided the robot didn't provide a proper PLC, nor a controller. And hitting your target with 16 bits over 80 m sounds like landing a moonlander.
Working on an original algorithm to explain human behavior from a 3rd-person perspective (1st stage). The whole research is divided into 6 stages.
In 2nd stage, I will mathematically establish the best course of action as an individual given the base theory.
In 3rd stage, I will explain common psychological phenomena through the theory, things like narcissism, anxiety, self-doubt, how to forgive others, etc.
In 4th stage, I will explain how the theory is the fastest way to learn across multiple domains and anyone can become a generalist and critical thinker.
In 5th stage, I will explain how society will unfold if everyone can become a generalist and critical thinker through the theory, and how this is the next big societal breakthrough, like the Industrial Revolution.
In 6th and last stage, I will think about how to use this theory to make India the next superpower, as this theory can give us the demographic advantage.
trying to build some opportunity for the VR/XR community with https://vr.dev
right now, it’s a better way to showcase your really specific industry skills and portfolio of 3D assets (i.e., “LinkedIn for VR/XR”) with hiring layered on
starting to add onto the current perf analysis tools and think more about how to get to a “lovable for VR/XR”
I've struggled with adding evals to my AI agents for the last few months, and felt that vibe evals should have a path to becoming a robust system down the line.
Working on a plugin for langfuse to create eval functions and datasets from ingested traces automatically, based on ad-hoc user feedback.
I'm working on mTOR (https://mtor.club), a free, science-based workout tracker I built to automate progressive overload. It's a local-first PWA that works completely offline, syncs encrypted data between your devices using passwordless passkeys, and allows for plan sharing via a simple link.
The core idea is to make progression easier to track and follow. After a workout, it analyzes your performance (weight, reps, and RIR), highlights new personal records (PRs), and generates specific targets for your next session. It also reviews your entire program to provide scientific analysis on weekly volume, frequency, and recovery for each muscle group. This gets displayed visually on an anatomy model to help you learn which muscles are involved, and you can track your gains over time with historical performance charts for each exercise.
During a workout, you get a total session timer, an automatic rest timer, and can see your performance from the last session for a clear target to beat. It automatically advances to the next incomplete exercise, and when you need to swap an exercise, it provides context-aware alternatives targeting the same muscles.
It's also deeply customizable:
- The UI has a dark theme, supports multiple languages (English, Spanish, German), lets you adjust the UI scale, and toggle the visibility of detailed muscle names, exercise types, historical performance badges, and a full history card.
- You can set global defaults for weight units (kg/lbs), rest times, and plan targets, or enable/disable metrics like Reps in Reserve (RIR) and estimated 1-Rep Max. The exercise library can be filtered by your available equipment, you can create your own custom exercises with global notes, and there's a built-in weight plate calculator.
- The progression system lets you define default rep ranges and RIR targets, or create specific overrides for different lifts (e.g., a 3-5 rep range for strength, 10-15 for accessories).
- Editing is flexible: you can drag-and-drop to reorder days, exercises, and sets, duplicate workout days, track unilateral exercises (left/right side), and enter data with a quick wheel picker.
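For anyone curious what "generates specific targets for your next session" might boil down to, here is a hypothetical double-progression rule in Python; the rep range, RIR target, and 2.5% increment are illustrative assumptions, not mTOR's actual logic:

```python
# Hypothetical sketch of a progressive-overload rule like the one described:
# progress reps within a range first, then add weight once the range tops out.
from dataclasses import dataclass

@dataclass
class SetResult:
    weight: float  # kg
    reps: int
    rir: int       # reps in reserve

def next_target(last: SetResult, rep_low: int = 10, rep_high: int = 15,
                rir_target: int = 2) -> SetResult:
    if last.rir > rir_target:
        # Too easy: push reps first, weight once the range is topped out.
        if last.reps < rep_high:
            return SetResult(last.weight, last.reps + 1, rir_target)
        return SetResult(round(last.weight * 1.025, 1), rep_low, rir_target)
    if last.rir < rir_target and last.reps <= rep_low:
        # Too hard at the bottom of the range: back the weight off slightly.
        return SetResult(round(last.weight * 0.95, 1), rep_low, rir_target)
    return SetResult(last.weight, last.reps, rir_target)  # repeat and consolidate

print(next_target(SetResult(weight=60.0, reps=15, rir=3)))
# SetResult(weight=61.5, reps=10, rir=2)
```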
Currently building out an application to help me allocate project funding in an academic setting to contracts and scientific staff. It'll be for internal use first; depending on my motivation, I might release it at some point.
The main features will be the management of split contracts (30% project A, 70% project B), pay grade progressions (German system), handling of unique spending and budget requirements from funding agencies (also for now only Germany + EU Funding), and reporting features for internal planning.
I am not a computer scientist, so we will see how this goes and whether it can replace the currently used disgusting Excel sheet.
YouTube's algorithm is all about engagement - more video game videos, more brainrot, their algorithm doesn't care about the content as long as the kid is watching.
My system allows parents to define their children's interests (e.g., a 12-year-old who enjoys DIY engineering projects, Neil deGrasse Tyson, and drawing fantasy figures)
.. and specify how the AI should filter video candidates (e.g., excluding YouTube Shorts).
Periodically, the system prompts the child with something like
"Tell me about your favorite family vacation."
And their response to that prompt provides the system with more ideas and interests to suggest videos to them.
Email me if you'd like to test: jim.jones1@gmail.com
iOS/Mac app for learning Japanese by reading, all in one solution with optional Anki integration
I went full-time on this a couple years ago. I’m now doing a full iOS 26 redesign, just added kanji drawing, and am almost done adding a manga mode via Mokuro. I’m also preparing influencer UGC campaigns as I haven’t marketed it basically at all yet.
https://revise.io - a new word processor with live collaboration, git-like revision history, and an AI agent like Cursor.
Basically, an agentic platform for working with rich text documents.
I’ve been building this solo since May and having so much fun with it. I created a canvas renderer and all of the word processor interactions from scratch so I can have maximum control over how things are displayed when it comes to features like AI suggestions and other more novel features I have planned for the future.
Working on revamping our calculator page on Levels.fyi to make it more useful to see refreshers and stock growth over time. Check it out at https://levels.fyi/calculator/
I've kind of been wasting time with the Cloudflare Workers engine. Trying to build a system that schedules these workers as a lightweight alternative to GitHub Actions. If you are interested in WASM, feel free to reach out. Looking to connect with other developers working in the WASM space.
I think getting a clear picture about what it is about yourself that needs work is actually a lot of the real work. Much of the rest of it is picking a direction and then living in that direction.
I am still [0] working on trying to recover who I was before whatever -- a couple of years ago -- rendered me progressively unable to concentrate on anything.
Last month was an improvement. This month I can't concentrate for long and I get distracted very easily, but I seem to be able to do more with what I have. A small sense of ambition, that I might be able to do bigger things and might not need to drop out of tech and get a simple job, is returning.
I am trying to use this inhibited, fractured state to clarify thoughts about useless technology and distractions, and about what really matters, because (without wishing to sound haughty) I used to be unusually good at a lot of tech stuff, and now I am not. It is sobering but it is also an insight into what it might be like to be on the outside of technology bullshit, looking in.
Working on https://run-phx.com
... a guide to trail running in the Valley of the Sun with notable routes, curated by actual human beings in the running community. (whoa)
Not earth shattering, but something that should exist.
To prove my expertise in anything from infrastructure through backend to frontend, I learned how to use Terraform to provision a managed Kubernetes cluster (on Oracle Cloud's excellent free-forever tier).
I am currently developing a web app consisting of a Spring/Kotlin backend and an Angular frontend that is meant to provide a UI for kubectl. It has OAuth login, lets you store several Kubernetes configs and select which one to use, and saves me from having to remember all the kubectl commands I can never remember.
It's what I'd like to have if I had to interact with a kubernetes cluster at work. Yes, I know there are several kubernetes UIs already, but remember, this is for 1) learning and 2) following through and completing a project at least somewhat.
I have been trying to study Chinese on my own for a while now and found it very frustrating to spend half the time just looking for simple content to read and listen to. Apps and websites exist, but they usually have very little content or ramp up the difficulty too quickly.
Now that LLMs and TTS are quite good, I wanted to try them out for language learning. The goal is to create a vast number of short AI-generated stories to bridge the gap between knowing a few characters and reading real content in Chinese.
Curious to see if it is possible to automatically create stories which are comfortable to read for beginners, or if they sound too much like AI-slop.
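One plausible guard against the difficulty ramping up too fast: gate each generated draft on how much of it the learner can already read. A sketch, where the 95% threshold and the tiny known-character set are made up for illustration:

```python
# Sketch of a "comprehensible input" gate for generated stories: accept a
# draft only if enough of its characters are ones the learner already knows.
def coverage(story: str, known: set[str]) -> float:
    hanzi = [ch for ch in story if '\u4e00' <= ch <= '\u9fff']
    if not hanzi:
        return 1.0
    return sum(ch in known for ch in hanzi) / len(hanzi)

known_chars = set("我你他是不好妈爸家去看水大小了")
draft = "我去看我妈。我家大。"

if coverage(draft, known_chars) >= 0.95:
    print("publish story")
else:
    print("regenerate with a tighter character constraint")
```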
Custom rack mount enclosure for the low-cost metal 3D printer controller, and debugging slicer software in Blender geometry nodes. Had to abandon classic CAM format export as the complexity of the tooling ballooned for various reasons.
Still reducing the design costs of a micro-positioning stage for hobbyists. I observed that the driver motion was mostly synchronous and symmetric... Accordingly, given the scale, only a single multiplexed piezoelectric actuator motor driver was actually needed, which cut that part of the design cost by 75%.
Still designing various test platforms to validate other key technologies. Sorry, no spoilers =3
Currently I've been working on a CLI tool [1] for my WebASM UI library [2] with the idea that all the gluecode generating stuff is abstracted away in nice CLI wizards.
Essentially like yeoman back then, to bootstrap your webapp and all the necessary files more easily.
Currently I am somewhat stuck because of Go's type system, as the UI components require a specific interface for the Dataset or Data/Record entries.
For example, a Pie chart would require a map[string]number which could be a float, percentage string or an integer.
A Line chart would require a slice of map[string]number, where each slice index would represent a step in the timeline.
A table would require a slice of map[string]any where each slice index would represent a step in the culling, but the data types would require a custom rendering method or Stringifier(?) of sorts attached to the data type, so that it's possible to serialize or deserialize the properties (e.g. yes/no in the UI meaning true/false, etc.).
As I want to provide UI components that can use whatever struct the developer provides, the Go way would be to use an interface. But that would imply that all data type structs on the backend side would have this kind of clutter attached to them.
No idea if something like a Parser and Stringifier method definition would make more sense for the UI components here...or whether or not it's better to have something like a Render method attached per component that does all the stringifying on a per-property basis like a "func(dataset any, index int, column string) string" where the developer needs to do all the typecasting manually.
Manual typecasting like this would be pretty painful as components then cannot exist in pure HTML serialized form, which is essentially the core value proposition of my whole UI components framework.
An alternative would be offering a marshal/unmarshal API similar to how JSON does it, but that would require the reflect package, which bloats the runtime binary by several MB and isn't tinygo compatible, so I'd heavily want to avoid that.
Currently looking for other libraries and best practices, as this issue is really bugging me a lot in the app I'm currently building [3] and it's a pretty annoying type system problem.
Feedback as to how it's solved in other frameworks or languages would be appreciated. Maybe there's an architectural convention I'm not aware of that could solve this.
I have been building OpenRun, a declarative web app deployment platform https://github.com/openrundev/openrun. It is an open source alternative to Google Cloud Run and AWS App Runner, running on your own hardware.
OpenRun allows defining your web app configuration in a declarative config using Starlark (which is like a subset of Python). Setting up a full GitOps workflow takes a single CLI command.
That command sets up a scheduled sync, which will look for new apps in the config and create them. It will also apply any config updates to existing apps and reload apps with the latest source code. After this, no further CLI operations are required; all updates are done declaratively. For containerized apps, OpenRun talks directly to Docker/Podman to manage the container build and startup.
There are lots of tools which simplify web app deployment. Most of them use a UI driven approach or an imperative CLI approach. That makes it difficult to recreate an environment. Managing these tools when multiple people need to coordinate changes is also difficult.
Any repo which has a Dockerfile can be deployed directly. For frameworks like Streamlit/Gradio/FastHTML/Shiny/Reflex/Flask/FastAPI, OpenRun supports zero-config deployments, there is no need to even have a Dockerfile. Domain based deployment is supported for all apps. Path based deployment is also supported for most frameworks, which makes DNS routing and certificate management easier.
OpenRun currently runs on a single machine with an embedded SQLite database or on multiple machines with an external Postgres database. I plan to support OpenRun as a service on top of Kubernetes, to support auto-scaling. OpenRun implements its own web server, instead of using Traefik/Nginx. That makes it possible to implement features like scaling down to zero and RBAC. The goal with OpenRun is to support declarative deployment for web apps while removing the complexity of maintaining multiple YAML config files. See https://github.com/openrundev/openrun/blob/main/examples/uti... for an example config, each app is just one or two lines of config.
OpenRun makes it easy to set up OAuth/OIDC/SAML based auth, with RBAC. See https://openrun.dev/docs/use-cases/ for a couple of use cases examples: sharing apps with family and sharing across a team. Outside of managed services, I have found it difficult to implement this type of RBAC with any other open source solution.
I'm working on a set of TypeScript libraries to make it really, really easy to spin up an agent, or a chatbot, or pretty much anything else you want to prototype. It's based around sensible interfaces, and while batteries are included, they're also meant to be removed when you've got something you want.
The idea is that a beginner should be able to wire up a personally useful agent (like a file-finder for your computer) in ten minutes by writing a simple prompt, some simple tools, and running it. Easy to plugin any kind of tracing, etc you want. Have three or four projects in prod which I'll be switching to use it just to make sure it fits all those use-cases.
But I want to be able to go from someone saying "can we build an agent to" to having the PoC done in a few minutes. Everything else I've looked at so far seems limited, or complicated, or insufficiently hackable for niche use-cases. Or, worst of all, in Python.
I'm working on a card game for android, it's being built with Monogame and C#. It's just go fish at the moment, but I'm thinking of expanding it into a full suite of card games like solitaire and poker. The source is available on GitHub if anyone wants to poke around and perhaps collaborate. https://github.com/joshsiegl1/GoFishRefresh
A tool for threshold signing software releases that I eventually want to integrate with SigStore, etc. to help folks distribute their code-signing. https://github.com/soatok/freeon
-----
Want E2EE for Mastodon (and other ActivityPub-based software), so you can have encrypted Fediverse DMs? I've been working on the public key transparency aspect of this too.
It's an AI webapp builder with a twist: I proxy all OpenAI API calls your webapp makes and charge 2x the token rate; so when you publish your webapp onto a subdomain, the users of your webapp are charged 2x on their token usage. Then you, the webapp creator, get 80% of what's left over after I pay OpenAI (and I keep 20%).
It's also a fun project because I'm making code changes a different way than most people are: I'm having the LLM write AST modification code; My site immediately runs the code spit out by the LLM in order to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
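To make the AST-modification idea concrete, here is a sketch of the kind of transformer code an LLM might emit for a ticket, using Python's stdlib `ast` purely for illustration (the product itself presumably operates on webapp JavaScript/TypeScript, per the blog post):

```python
# Illustration of "the LLM writes AST modification code": instead of
# emitting a diff, the model emits a transformer, which we run on the source.
import ast

class RenameFunction(ast.NodeTransformer):
    """Example ticket: 'rename function fetch_data to load_data'."""
    def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.AST:
        self.generic_visit(node)
        if node.name == "fetch_data":
            node.name = "load_data"
        return node

source = """
def fetch_data(url):
    return url
"""

tree = ast.parse(source)
tree = RenameFunction().visit(tree)
print(ast.unparse(tree))  # def load_data(url): ...
```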
I'm building an open source NAT traversal and networking framework called P2PD. Built from the ground up to allow things like multi-network-interface applications, improved network programming in Python, and, if people want it, an easy way to bypass NATs. The thing is: it depends on public servers for some of this, which tend to change a lot, causing errors when they're all down.
What I'm building at the moment is a server monitoring solution for STUN, TURN, MQTT, and NTP servers. I wanted the software for this to be portable, so I wrote a simple work queue myself. Python doesn't have linked lists, which is the data structure I'm using for the queues. They allow for O(1) deletes, which you can't really get with many Python data structures - important for work items when you're moving work between queues.
For the actual workers I keep things very simple: I spawn around 100 independent Python processes, each with its own event loop. This uses a crapload of memory, but the advantage is that you get parallel execution without any complexity. It would be extremely complex to do that in code alone, and asyncio's event loop doesn't play well with parallelism, so you really only want one per process.
Result: simple, portable Python code that can easily manage monitoring hundreds of servers (sorry, didn't mean for that to sound like ChatGPT, lmao, incidental). The DB for this is memory-based to avoid locking issues. I did use SQLite at first, but even with optimizations there were locking issues. Now I only use SQLite for import/export (checksums).
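For readers wondering what the O(1)-delete queue looks like, here is a minimal sketch of the hand-rolled linked-list approach described above (my illustration, not the P2PD code): each work item keeps a handle to its own node, so unlinking or moving it between queues never scans a list:

```python
# Doubly linked list with sentinel nodes; node handles make removal O(1).
class Node:
    __slots__ = ("item", "prev", "next")
    def __init__(self, item=None):
        self.item, self.prev, self.next = item, None, None

class WorkQueue:
    def __init__(self):
        self.head, self.tail = Node(), Node()   # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def push(self, item) -> Node:
        node = Node(item)
        last = self.tail.prev
        last.next = node
        node.prev, node.next = last, self.tail
        self.tail.prev = node
        return node                              # handle for O(1) removal

    def remove(self, node: Node):
        node.prev.next, node.next.prev = node.next, node.prev
        node.prev = node.next = None

pending, in_flight = WorkQueue(), WorkQueue()
handle = pending.push({"server": "stun.example.org", "check": "binding"})
pending.remove(handle)                # O(1): no scan of either queue
in_flight.push(handle.item)
```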
Yet another NVR (in Python). Also trying to make a switch for HPM-style rocker light switches. These things are devilish: the switch requires a lot of force at a strange angle, but I don't want to break it, so, knowing nothing about mechanical stuff, I've had to learn about slip clutches, idling gears, worm gears, ratchet wheels, and rack-and-pinions (ofc, from a hobbyist perspective). I know there's a SwitchBot and a Fingerbot, but neither of those will work with that type of switch unless you tape some sort of torque lever onto the light (which I don't want to do). It's a rabbit hole :/
Here's a breakdown of the projects people are working on, with AI-related projects in their own category:
## AI-Related Projects
* *[justinc8687] Migraine Tracker:* This project aims to help users track their migraines using voice input, with the goal of analyzing unstructured data with AI to find root causes. It uses Deepgram for transcription and an LLM for analysis, with a "chat with your data" feature.
* *[dcheong] User Mastery:* A platform for product teams to manage updates, changelogs, roadmaps, documentation, and feedback, utilizing AI to assist.
* *[jared_stewart] Survey Response Automation:* Using LLMs to automate the processing of parent survey responses for a school, aiming for consistent summarization and statistics.
* *[codybontecou] Voice-Script:* A tool that allows users to discuss and generate GitHub issues, pull requests, and code diffs using ChatGPT's voice agents.
* *[conditionnumber] LLM for Data Matching:* Proposes using an LLM to score and match candidates identified by a tool like "jellyjoin," reducing a large number of potential matches to a manageable set for AI analysis.
* *[taherchhabra] Infinite Canvas for AI Generation:* A platform for AI image, video, audio, and 3D generation, designed to help create cohesive stories with consistent characters and locations.
* *[chipotle_coyote] Story Theory Program (Spiritual Successor to Dramatica):* Aims to create a story theory and brainstorming program, drawing inspiration from Dramatica but incorporating modern concepts, and potentially using AI for some aspects.
* *[rhl314] Magnetron (Whiteboard Explainers):* An AI-powered tool that generates whiteboard explainer videos from prompts or documents, using AI for design, animations, and voiceovers.
* *[adamsaparudin] AI SaaS Workflow:* A project focused on enabling users to launch their own AI SaaS applications quickly, abstracting away complexities like user management and billing.
* *[garbage] Dreamly.in (AI Bedtime Stories):* An automated, personalized, and localized bedtime story generator for children, using AI to create stories based on child profiles and themes.
* *[nowittyusername] Metacognitive AI System:* This project focuses on creating an AI agent with multiple specialized LLMs that can reason, analyze, and communicate internally to provide more sophisticated responses to humans, rather than just acting as a simple chatbot.
* *[fjulian] Veila (Privacy-First AI Chat):* A privacy-focused AI chat service that uses a proxy to prevent user profiling and offers end-to-end encrypted history, allowing users to switch models mid-chat.
* *[ai-christianson] Gobii Platform (Open-Source AI Employees):* Browser-based AI agents that can log into real websites, fill out forms, and adapt to changes, functioning as "browser-use cloud" employees.
* *[apf6] Dev Tools for MCP Servers:* Building libraries to help write tests for MCP (Model Context Protocol) servers, focusing on AI-related development.
* *[mfrye0] Plaid/Perplexity for Business Data:* Creating composable deep research APIs powered by a business graph and web search index to integrate business data into applications and AI agent processes.
* *[vishakh82] Monadic DNA Explorer:* A tool to explore genetic traits from GWAS Catalog and user DNA data, with AI insights run locally in a TEE (Trusted Execution Environment).
* *[jerrygoyal] JetWriter.ai:* A Chrome extension that uses AI to assist with tasks on any website, such as chatting with pages, fixing grammar, replying to emails, translating, and summarizing.
* *[chadwittman] Eldrick.golf (AI Golf Club Fitter):* An AI-powered golf club fitting tool that aims to rival human professionals in custom club fitting.
* *[jiffylabs] AI Governance and Security Platform:* A platform and browser extension to provide visibility into AI tool usage within organizations, discover shadow AI, enforce policies, and create audit trails. It also acts as a coach for end-users.
* *[aantix] Alternative YouTube App for Kids:* An app that uses AI to filter YouTube videos based on parental-defined interests and prompts children for input to discover new interests, moving away from engagement-driven algorithms.
* *[qwikhost] Video AI Editor:* A tool for editing videos using AI.
* *[accountisha] CPA Exam Prep Tool:* A system that generates word problems and step-by-step solutions to help individuals prepare for the American CPA exams.
* *[felixding] Kintoun.ai:* A simple document translator that preserves file formatting and layout, likely using AI for translation.
* *[skyfantom] LLM + Stocks Market Analysis:* Experimenting with LLMs for stock market analysis and comparing different models for their effectiveness.
* *[braheus] English-to-Function Definition (LLM):* A library that allows defining functions in English using an LLM, which can then be used like regular TypeScript functions, enabling agentic orchestration.
* *[gametorch] AI Sprite Animator:* An AI-powered tool for animating sprites in 2D video games.
* *[sab_hn] Endless Chinese:* An AI-generated story platform for learning Chinese, aiming to create a vast number of short stories for beginners.
* *[asdev] FleetCode (Coding Agent Control Panel):* An open-source control panel for running coding agents in parallel.
* *[trogdor] AI Document Summarization/Analysis:* A tool that uses AI to analyze documents and provide summaries, potentially for research or other forms of content consumption.
* *[osint.moe] LLM-Powered OSINT Helper:* An app that uses LLMs to build an interactive research graph for Open Source Intelligence (OSINT) gathering.
* *[kintoun.ai] Document Translator:* A tool that translates documents while preserving formatting and layout, likely leveraging AI.
* *[mclaren] AI-powered code generation and analysis tools.*
* *[skanga] Conductor (LLM-Agnostic Framework):* A framework for building sophisticated AI applications using a subagent architecture, inspired by concepts of "The Rise of Subagents."
* *[ashdnazg] Palindrome Finding (CUDA):* Porting code to CUDA to find palindromes, with a focus on GPU optimization and exploring new elements in number series.
* *[veesahni] AI in Customer Communications:* Exploring effective, hype-free usage of AI in customer communications.
* *[cryptoz] Code+=AI (AI Webapp Builder):* A platform for building AI web apps where API calls are proxied, and users are charged for token usage, with creators earning a percentage of the revenue. The LLM is also used to modify code.
* *[exasperaited] Recovering from Cognitive Impairment:* Using AI tools to help clarify thoughts and potentially recover cognitive abilities lost due to a past event.
* *[waxycaps] CEO Replacement:* A project related to AI that has the goal of replacing a CEO.
* *[vladoh] Simple Photo Gallery (V2):* While not AI-specific, the mention of a future SaaS offer for users who don't want to self-host suggests potential for AI-driven features in the future.
* *[dheera] Invoice Generators for "Inconvenience Fees":* While not directly AI, the idea of invoicing for "inconvenience fees" could be an interesting application for AI to determine and quantify such fees.
* *[yomismoaqui] HN Post/Comment Analyzer:* A website for analyzing posts and comments on Hacker News, potentially using AI to filter or summarize content.
* *[ce0.ai] CEO Replacement:* A project explicitly stating it's about replacing a CEO with AI.
* *[robinsloan] Home-cooked App Essay Inspiration:* While not directly an AI project, the mention of this essay and the focus on personal apps could lead to AI-integrated personal tools.
* *[zuhayeer] Levels.fyi Calculator Revamp:* Focusing on improving a calculator page for refreshers and stock growth, which could involve AI for analysis or predictions.
* *[lukehan] AI Data Enrichment Platform:* A platform to help users enrich their data so AI, like ChatGPT, can understand it better, measured by an "AI Understanding Score."
* *[asimovDev] Sound Blaster Command Control:* While primarily reverse engineering, the mention of "creative's multiplatform solutions" could imply future AI integration for smarter control.
* *[daveevad] "Myself, myself needs work":* This self-reflection could involve AI tools for personal development or understanding oneself better.
* *[thenipper] Campaign Management App for TTRPGs:* While primarily a wiki-like app, the potential for AI to assist in game mastering or content generation is present.
A new PostgreSQL index type which outperforms B-Trees for many common cases. As a wild experiment, I'm entirely vibe coding this rather than hand-writing it.
It works by specializing for the common case of read-only workloads and short, fixed-length keys/includes (int, uuid, text<=32b, numeric, money, etc - not json) and (optionally) repetitive key-values (a common case with short fixed-length keys). These kinds of indexes/tables are found in nearly every database for lookups, many-many JOIN relationships, materialized views of popular statistics, etc.
Currently, it's "starting to work": 100% code coverage and query performance that usually matches or beats btree. Due to compression, it can use as much as 99.95% less memory (!), with correspondingly less pressure on cache/RAM/IO. Of course, there are degenerate cases (e.g. all-unique UUIDs, many INCLUDEs, etc.) where it's about the same size as btree. As with all indexes, performance is limited by the PostgreSQL executor's interface, which is record-at-a-time with high-overhead records: aggregates (e.g. COUNT()) still require returning every record instead of a single final answer. I'd love help coding an FDW interface where aggregates, or even substantial query plans, could be "pushed down" and executed inside the index.
The plan is to publish and open source this work.
I'd welcome collaborators and have lots of experience working on small teams at major companies. I'm based in NYC but remote is fine.
- must be willing to work with LLMs and not "cheat" by hand-writing code.
- Usage testing: must be comfortable with PostgreSQL and indexes. No other experience required!
- Benchmarking: must know SQL indexes and have benchmarking experience - no pgsql internals required.
- For internals work, must know C and SQL. PostgreSQL is tricky to learn but LLMs are excellent teachers!
- Scripting code is in bash, python and Makefile, but again this is all vibe coded and you can ask LLMs what it's doing.
- any environment is fine. I'm using linux/docker (multi-core x86 and arm) but would love help with Windows, native MacOS and SIMD optimization.
- I'm open to porting/moving to Rust, especially if that provides a faster path to restricted environments like AWS RDS/Aurora.
- your ideas are welcome! But obviously, we'll need to divide and conquer, since the LLMs are making rapid changes to the core and we'll have to deal with code conflicts.
Bit confused as to your position on funding.
3. Is 3 tests enough? On the several product test results I clicked, there's often wide variation among the 3 samples. Or would the visualization/rating tell me that all 3 numbers are unacceptably bad, whether it's 635.8 or 6728.6?
4. If I know that plastic contamination is a widespread problem, can I secretly fund testing of my competitors' products, to generate bad press for them?
5. Could this project be shut down by a lawsuit? Could the labs be?
1. I'm still working to make results more digestible and actionable. This will include the %TDI toggle (total daily intake, for child vs adult and USA vs EU) as seen on PlasticList, but I'm also tinkering with an even more consumer-friendly 'chemical report card'. The final results page would have both the card and the detailed table of results.
2. I have not found any regulation-violating levels yet, so in some sense, I'll cross that bridge when I get there. Part of the issue here is that many believe the FDA levels are far too relaxed, which is part of why demand for a service like laboratory.love exists.
3. This is part of the challenge that PlasticList faced, and a lot of my thinking around the chemical report card is related to this. Some folks think a single test would be sufficient to catch major red flags; I think triplicate testing is a reasonable balance, statistically robust while not being completely cost-prohibitive.
4. Yes, I suppose one could do that, as long as the funded products can be acquired by laboratory.love anonymously through their normal consumer supply chains. Laboratory.love merely acquires three separate batches of a given product from different sources, tests them at an ISO/IEC 17025-accredited lab, and publishes the data.
5. I suppose any project can be shut down by a lawsuit, but laboratory.love is not currently breaking any laws as far as I'm aware.
Great site!
What bugs me is that plastics manufacturers advertise "BPA-free", which is technically correct, but then add a very similar chemical from the same family that has the same effect on the plastic - which is good - but also the same effect on your endocrine system.
Here is a Stripe link: https://donate.stripe.com/9B614o4NWdhN83l9r06c001
I'll add subscriptions as a more formal option on laboratory.love soon!
Disclaimer: I don't think I can offer the 365-day refund with recurring donations like this. The financial infrastructure would add too much complexity.
I hope we can agree that we are better off than that now.
What I'm curious about is whether you think it's been a steady stream of improvements, and we just need to improve further? Or if you think there was some point between 1900 and now where food health and safety was maximized, greater than either 1900 or now, and we've regressed since then?
Or put another way: it was a simple question that the ggp can answer or not as they choose. I was just curious for their perspective.
My instinct is that things have largely gotten better over time. At a super-macro level, in 1900 we had directly adulterated food (e.g. the Chicago meat that soldiers called "embalmed"). In the mid-20th century we had waterways that caught fire, and leaded gas.
By the late 20th we had clean(er) air (this is all from a U.S. perspective) and largely safe food. I think if we were to claim a regression, the high point would have to be around 2000, but I can't point to anything specific going on now that wasn't also going on then -- e.g. I think microplastics were a thing then as well, we just weren't paying attention.
It's interesting that a bunch of the funded products have been funded by a single person.
Do you know if it's the producers themselves? Worried rich people?
I've yet to have any product funded by a manufacturer. I'm open to this, but I would only publish data for products that were acquired through normal consumer supply chains anonymously.
For example, there are two individuals who own the same $100k machine for testing the performance of loudspeakers.
https://www.audiosciencereview.com/forum/index.php
https://www.erinsaudiocorner.com/
Both of them do measurements and YouTube videos. Neither one has a particularly good index of their completed reviews, let alone tools to compare the data.
I wish I could subscribe to support a domain like “loudspeaker spin tests” and then have my donation paid out to these reviewers based on them publishing new high-quality reviews, with good data published to a common store.
Me being naive, I thought “how hard would it actually be to build a free e-sign tool?”
Turns out not that hard.
In about a weekend, I built a UETA and ESIGN compliant tool. And it was free. And it cost me less than $50. Unlimited free e-sign. https://useinkless.com/
DocuSign customers buy trust.
Free e-signatures are a great idea, have you considered getting a foundation to back the project and maybe taking out some indemnity insurance, perhaps raising a dispute fund?
It's a well-recognised tool for contract agreements, and you pay the money so that you are indemnified for any oopsies that might happen in transit.
https://documenso.com/
For example, 1 PCR reaction (a common reaction used to amplify DNA) costs about $1 each, and we're doing tons every day. Since it is $1, nobody really tries to do anything about it - even if you do 20 PCRs in one day, eh it's not that expensive vs everything else you're doing in lab. But that calculus changes once you start scaling up with robots, and that's where I want to be.
Approximately $30 of culture media can produce >10,000,000 reactions worth of PCR enzyme, but you need the right strain and the right equipment. So, I'm producing the strain and I have the equipment! I'm working on automating the QC (usually very expensive if done by hand) and lyophilizing for super simple logistics.
My idea is that every day you can just put a tube on your robot and it can do however many PCR reactions you need that day, and when the day is done, you just throw it out! Bring the price from $1 each to $0.01 + greatly simplify logistics!
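The arithmetic from the post, worked out in a few lines (numbers as stated above; the daily-reaction count is just an example):

```python
# The economics, worked out: enzyme produced in-house is a rounding error
# per reaction, so the $0.01 target is dominated by the remaining reagents
# and QC rather than the polymerase itself.
media_cost = 30.0          # USD of culture media per batch
reactions = 10_000_000     # >1e7 reactions worth of enzyme per batch
enzyme_per_rxn = media_cost / reactions
print(f"enzyme: ${enzyme_per_rxn:.6f}/reaction")    # $0.000003

daily_rxns = 20            # example day in the lab
print(f"today at $1.00:  ${daily_rxns * 1.00:.2f}") # $20.00
print(f"target at $0.01: ${daily_rxns * 0.01:.2f}") # $0.20
```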
Of course, you can't really make that much money off of this... but will still be fun and impactful :)
Some things that would be cool:
- Not sure if this is feasible but... reasonable-cost machines to synthesize oligos?

1. You can purchase gel boxes that do 48 to 96 lanes at once. I'd ideally have it on a robot whose only purpose is to load and run these once or twice a day. All the samples coming through get batched together and run.
2. A Bioanalyzer seems nice for quantifying things like PCRs to make sure you're getting the right size, but to be honest I haven't thought that much about it. qPCRs actually become very cheap if you can keep the machines full. You can also use something like a NanoDrop, which is much, much cheaper.
3. Pichia pastoris expression ^
4. You can use a plate reader (another thing that goes bulk nicely), but the reagents you can't really get around (but cheaper in bulk from China)
5. If you aggregate, these become really cheap. The complicated bits are getting the proper cytomat parts for shaking, as they are limited on the used market
6. These can't be automated well, so I honestly haven't thought too much about it.
7. Reagents are cheaper in bulk from China.
8. Ehhhh, maybe? But not really. If you think about a scaled, centralized system, you can get away with not using oligos for a lot of things.
Anyhow good luck. Would love to follow if you do anything with this in the future. Do you have a blog or anything?
https://github.com/scallyw4g/bonsai
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates, and in my completely unbiased opinion it is WAY better.
https://github.com/scallyw4g/poof
A project to implement 1000 algorithms. I have finished around 400 so far and I am now focusing on adding test cases, writing implementations in Python and C, and creating formal proofs in Lean.
It has been a fun way to dive deeper into how algorithms work and to see the differences between practical coding and formal reasoning. The long-term goal is to make it a solid reference and learning resource that covers correctness, performance, and theory in one place.
The project is still in its draft phase and will be heavily edited over the next few months and years as it grows and improves.
If anyone has thoughts on how to structure the proofs or improve the testing setup, I would love to hear ideas or feedback.
I don't have any feedback, but rather a question. I've seen many repositories of people sharing algorithm implementations, at least on GitHub, in many different languages (e.g. https://github.com/TheAlgorithms). What did you find missing from those repositories that made you want to write a book and implement hundreds of algorithms yourself?
No organization for learners either. It jumps straight into implementations without a logical flow from fundamentals. I want to build something more structured: start from the very foundation (like data structures, recursion, and complexity analysis), then move to classical algorithms (search, sort, graph, dynamic programming), and eventually extend to database internals, optimization, and even machine learning or AI algorithms. Basically, a single consistent roadmap from beginner to researcher level, where every algorithm connects to the next and builds intuition step by step.
Another very good resource for beginners is https://www.hello-algo.com. At first, I actually wanted to contribute there, since it explains algorithms visually and in simple language. But it mostly covers the basics and stops before more advanced or applied topics. I want to go deeper and treat algorithms as both code and theory, with mathematical rigor and formal proofs where possible. That is something I really liked about Introduction to Algorithms (CLRS) and of course The Art of Computer Programming (TAOCP) by Knuth: they combine reasoning, math, and practice. My goal is to make something in that spirit, but more practical and modern, bridging the gap between academic books and messy open source repos.
I want to change that view and show that algorithms are beautiful and useful beyond interviews. They appear everywhere, from compilers to databases to the Linux kernel, where I found many interesting data structures worth exploring. (i will share more about this topic later)
I hope to share more of these insights and connect with others who enjoy discussing real world algorithm design, which is what I love most about the Hacker News community (except for the occasional trolls that show up from time to time).
The VM and transpiler were originally implemented by hand, and later I used Codex to help polish the code. The generated output works, though it is a bit messy in places. Hopefully, after finishing a few books, I can return to the project with more experience and add better use cases for it.
I usually feel too many people wildly throw around terms they hardly understand, in the belief they cannot possibly understand them. That's so wrong; every executive should understand some of what determines the bottom line. It's not like people skip economics because it's hard.
Would love to perhaps contribute sometime next year. Starred, and until then good luck - perhaps add a donation link!
I really like your idea of targeting executives and connecting it to real business outcomes. Getting decision makers to truly understand the fundamentals behind the technology would make a huge difference.
I feel like the presentation of Lomuto's algorithm on p.110 would be improved by moving the i++ after the swap and making the corresponding adjustments to the accesses to i outside the loop. Also mentioning that it's Lomuto's algorithm.
These comments are probably too broad in scope to be useful this late in the project, so consider them a note to myself. C as the language for presenting the algorithms has the advantage of wide availability, not sweeping performance-relevant issues like GC under the rug, and stability, but it ends up making the implementations overly monomorphic. And some data visualizations as in Sedgewick's book would also be helpful.
That said, I personally prefer Introduction to Algorithms (CLRS) for its formal rigor and clear proofs, and Grokking Algorithms for building intuition.
The broader goal of this project is to build a well tested, reference quality set of implementations in C, Python, and Go. That is the next milestone.
Your comment brought back an old memory for me. My first programming language in high school was Turbo Pascal. That IDE was amazing: instant compilation, the blue screen TUI, F1 for inline help, a surprisingly good debugger, and it just felt so smooth and fast back then. No internet needed, no AI assistance, just pure focus and curiosity. Oh, how I really miss those days :)
However, you are right, Prof. Sedgewick has long maintained versions of his material across multiple languages. I remember that the third edition has C, C++ and Java versions.
https://github.com/olooney/jellyjoin
It hits a sweet spot by being easier to use than record linkage[3][4] while still giving really good matches, so I think there's something there that might gain traction.
[1]: https://platform.openai.com/docs/guides/embeddings
[2]: https://en.wikipedia.org/wiki/Hungarian_algorithm
[3]: https://en.wikipedia.org/wiki/Record_linkage
[4]: https://recordlinkage.readthedocs.io/en/latest/
I see you saved a spot to show how to use it with an alternative embedding model. It would be nice to be able to use the library without an OpenAI api key. Might even make sense to vendor a basic open source model in your package so it can work out of the box without remote dependencies.
[1]: https://www.nomic.ai/blog/posts/nomic-embed-text-v1
[2]: https://ollama.com/search?c=embedding
If you're adding more LLM integration, a cool feature might be sending the results of allow_many="left" off to an LLM completions API that supports structured outputs. E.g. imagine N_left=1e5 and N_right=1e5, but they are different datasets. You could use jellyjoin to identify the top ~5 candidates in right for each left, reducing candidate matches from 1e10 to 5e5. Then you ship the 5e5 off to an LLM for final scoring/matching.
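A sketch of what that two-stage pipeline could look like; the candidate structure and the `llm_pick` helper are assumptions for illustration, not jellyjoin's actual API:

```python
def llm_pick(left_record: str, candidates: list[str]) -> int | None:
    """Placeholder for a structured-output LLM call that returns the index
    of the best candidate, or None; swap in any completions API here."""
    for i, cand in enumerate(candidates):  # stand-in logic so the sketch runs
        if cand.lower().replace(" ", "") == left_record.lower().replace(" ", ""):
            return i
    return None

def final_matches(candidate_pairs: dict[str, list[str]]) -> dict[str, str | None]:
    # candidate_pairs: left record -> ~5 right candidates from jellyjoin,
    # i.e. 5e5 LLM-scored pairs instead of the raw 1e10 cross product.
    return {left: (cands[i] if (i := llm_pick(left, cands)) is not None else None)
            for left, cands in candidate_pairs.items()}

print(final_matches({"ACME Corp": ["Acme Corp", "Acme Inc", "AMCE Corporation"]}))
# {'ACME Corp': 'Acme Corp'}
```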
It's built on top of Kubernetes, based on learnings from my previous experiences scaling infrastructure.
If you look at the markup PaaS providers (Heroku, Fly, Render) apply on top of IaaS (AWS, Hetzner), it's on the order of 5-10x. But going without one and trying to stitch together random AWS services is a huge PITA for a medium-sized engineering team (we've tried).
On top of all that, there's a whole host of benefits to being on Kubernetes - namely, that you can install any Helm package with one click, which Canine also manages.
A good example is Sentry -- even though it has an open source offering, almost everyone pays for the cloud version because it's too scary to self-host. With Canine, it's one click, and you get a sentry.your-domain.com to use for whatever you need.
Recently got a sponsorship from the Portainer team to allow me to dedicate way more time to this project, so hugely grateful to them for that.
Code: https://github.com/czhu12/canine
I'd like to think at this point (about 2 years into development) we've gotten to a place where the end user doesn't even know they are using Kubernetes.
Last month’s “what are you working on” thread prompted me to upload this game to itch, and one month later I’ve got a small community, lots of feedback, and iterations. It brought a whole new life to a project that was on the verge of being abandoned.
So, I’m really grateful for this thread. https://explodi.itch.io/microlandia
https://microlandia.tubatuba.net/simulation_details
Quite interesting details.
I wonder if you simulate at the individual level or the group level? It would be cool, at the individual level, to have each citizen making decisions individually and to see some emergent behavior.
Also, how corruption emerges in government, etc.
Also, if they have no job, maybe they could try Uber/food-delivery-style crappy jobs, or start their own business.
Maybe also: less money, less likely to have kids? Would be nice to show how poverty does or doesn't drive population growth: if too poor, people might have no education and have more kids; an average citizen who can't save money will avoid kids. That's why individual-level simulation could find these emergent patterns. But probably too expensive computationally?
If you are referring to the citizens, yes, at the individual level. However, for traffic I'm using a sampling rate.
> Also, if they have no job, maybe they could try Uber/food-delivery-style crappy jobs, or start their own business.
That's an awesome idea, I added it to my backlog :)
> Less money, less likely to have kids?
This is mega tricky, because it happens very differently across the world. Yes, it can be computationally expensive; that's why the city is so small (for now). But as I start to distribute the simulation across many cores, players with high-core-count CPUs will be able to choose a bigger city size :) I agree that individual-level simulation is what makes it interesting, and I plan to keep it like that.
I heard that the SimCity devs have had to fudge that for gameplay's sake ever since the earliest versions.
Parking space simulation is coming soon. I feel I would completely miss the point if I left that out. The idea is to have street parking (with configurable profit for the city), parking lots, and buildings with underground parking, which should conflict, of course, with metro lines.
This weekend I have plans to start playing a lot of Subway Builder (https://www.subwaybuilder.com), which I'm really excited about, and maybe get some books on the subject, in order to get it right.
It's an all-in-one toolkit designed to automate the boring stuff so you can focus on flying. Core features include: automatic flight tracking that turns into a digital logbook entry, a full suite of E6B/conversion calculators, customizable checklists, and live weather decoding.
It’s definitely not a ForeFlight killer, but it's a passion project I'm hoping can be useful for other student and private pilots.
App Store: https://apps.apple.com/app/pilot-kit/id6749793975 Google Play: https://play.google.com/store/apps/details?id=club.air.pilot...
Any feedback is welcome!
It supports multiple LLM providers: OpenAI, Anthropic, xAI, DeepSeek, Gemini, OpenRouter, Z.AI, Moonshot AI, all with automatic failover, prompt caching, and token-efficient context management. Configuration occurs entirely through vtcode.toml, sourcing constants from vtcode-core/src/config/constants.rs and model IDs from docs/models.json to ensure reproducibility and avoid hardcoding. [0], [1], [2]
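As an aside for readers: automatic failover like this presumably means trying providers in order and falling through on errors. A concept sketch (in Python for brevity; VT Code itself is Rust, and the provider names and callables below are hypothetical):

```python
# Minimal multi-provider failover: try each provider in order, fall through
# on any error, and raise only if every provider fails.
def with_failover(providers, prompt):
    last_err = None
    for name, send in providers:
        try:
            return name, send(prompt)
        except Exception as err:          # rate limit, outage, etc.
            last_err = err
    raise RuntimeError("all providers failed") from last_err

def flaky(prompt):
    raise TimeoutError("provider down")

def healthy(prompt):
    return f"response to: {prompt}"

providers = [("openrouter", flaky), ("gemini", healthy)]
print(with_failover(providers, "explain this diff"))
# ('gemini', 'response to: explain this diff')
```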
Recently I've added Agent Client Protocol (ACP) integration. VT Code is now a fully compatible ACP agent and works with any ACP client: Zed (first-class support), Neovim, marimo notebooks. [3]
[0] https://github.com/vinhnx/vtcode
[1] https://crates.io/crates/vtcode
[2] https://docs.rs/vtcode
[3] https://agentclientprotocol.com/overview/agents
Thank you!
I love that you've made it open source and that it's in Rust, thanks a lot for the work!
I chose Rust since I have some familiarity and experience with it. VT Code is, of course, AI-assisted; I mainly use Codex to help me build it. Thank you again for checking it out, have a great day! : )
I’m curious though, how significant do you think it is for the agent to have semantic access through Tree-sitter?
Also, what model have you had the most success with?
> I’m curious though, how significant do you think it is for the agent to have semantic access through Tree-sitter?
For this, I'm really not sure, but since the start of building VT Code I've had this idea of using tree-sitter to give the agent more (or faster and more precise) semantic understanding of the code, instead of relying on it to figure everything out itself. Naively, I think this could help the agent make better language-specific and more accurate decisions about the workspace (context) it is working in. Without tree-sitter, the agent could probably figure things out eventually; I should research this topic more. In VT Code, I included 6 languages (Go, Python, Rust, TypeScript, Swift...) via rust-binding crates; when you launch the vtcode agent on any workspace, it will show the main languages in the workspace right away.
> Also, what model have you had the most success with?
I'm mainly on a limited budget, so I use OpenRouter and its vast model support, which lets me prototype quickly for different use cases. For the VT Code agent I mainly use x-ai/grok-code-fast-1; in my experience it is the best fit for building VT Code itself because of its speed, versatile function calling, and good instruction following. I have also had good success with x-ai/grok-4-fast. I have not tried claude-4.5-sonnet or gpt-5/gpt-5-codex though. I would really love to run benchmarks to see how VT Code performs on real-world coding tasks; I'm aiming for the Aider polyglot bench, terminal-bench, and swe-bench-lite - it's in my plan now in my GitHub issues.
For VT Code itself, I instruct it to strictly follow the system prompt, in which I take inspiration from Anthropic, OpenAI, and Devin guides/blogs on how to build a coding agent. But for a model-agnostic agent, the capability to support multiple providers and multiple models is a challenge, and for this I think I need help. I'm fortunate to have support from the open-source community suggesting I use zig; I have had good success with it so far for implementing LLM calls and the /model picker.
Overall, in my experience building VT Code, the most important aspect of an effective coding agent is context engineering, as all the big labs' research shows. A good system prompt is also very important, but context is not everything: https://github.com/vinhnx/vtcode/blob/main/prompts/system.md
// Sorry, English is not my main language, so pardon the typos and grammar. Thank you!
https://www.inclusivecolors.com/
- You can precisely tweak every shade/tint so you can incorporate your own brand colors. No AI or auto generation!
- It helps you build palettes that have simple to follow color contrast guarantees by design e.g. all grade 600 colors have 4.5:1 WCAG contrast (for body text) against all grade 50 colors, such as red-600 vs gray-50, or green-600 vs gray-50.
- There's export options for plain CSS, Tailwind, Figma, and Adobe.
- It uses HSLuv for the color picker, which makes it easier to explore accessible color combinations because only the lightness slider impacts the WCAG contrast. A lot of design tools still use HSL, where the WCAG contrast goes everywhere when you change any slider which makes finding contrasting colors much harder.
- Check out the included example open source palettes and what their hue, saturation and lightness curves look like to get some hints on designing your own palettes.
It's probably more for advanced users right now but I'm hoping to simplify it and add more handholding later.
Really open to any feedback, feature requests, and discussing challenges people have with creating accessible designs. :)
https://www.inclusivecolors.com/?style_dictionary=eyJjb2xvci...
I've sorted the colors by luminance/lightness and added a gray swatch for comparison, so you can explore which color pairs pass WCAG contrast checks.
I haven't really gotten into colorblind-safe colors like this yet, where the colors mostly differ by hue and not luminance. Colorblind and non-colorblind people should be able to tell colors apart based on luminance difference, i.e. luminance contrast. Hue perception is impacted by the several different kinds of color blindness, so it's much trickier to find a set of colors that everyone can tell apart. This relates to the WCAG recommendation that you don't rely on hue (contrast) to convey essential information (https://www.w3.org/WAI/WCAG21/Understanding/use-of-color.htm...).
The gray swatch above could be called colorblind safe, for example, because as long as you pick color pairs with enough luminance contrast between them, colorblind and non-colorblind people should be able to tell them apart. You could even vary the hue and saturation of each shade to make it really colorful; as long as you don't change the luminance values, the WCAG contrast between pairings should still pass.
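For anyone curious, the contrast math behind all this is simple enough to sketch. This is a minimal Python version of the WCAG 2.x relative-luminance and contrast-ratio formulas; the hex colors are Tailwind-style examples of mine, not values from the tool itself.

```python
# Minimal sketch of the WCAG 2.x contrast math discussed above.
# Relative luminance and contrast ratio follow the WCAG definitions;
# the hex parsing is just a convenience helper.

def srgb_channel_to_linear(c: float) -> float:
    """Linearize one sRGB channel (0-1) per the WCAG formula."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a hex color like '#dc2626'."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return (0.2126 * srgb_channel_to_linear(r)
            + 0.7152 * srgb_channel_to_linear(g)
            + 0.0722 * srgb_channel_to_linear(b))

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio, always >= 1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# e.g. a red-600-ish foreground on a gray-50-ish background
print(contrast_ratio("#dc2626", "#f9fafb"))  # ~4.6, clears the 4.5:1 body-text bar
```

Note how only luminance enters the ratio, which is why an HSLuv-style lightness slider maps so cleanly onto WCAG contrast.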
There's so much more to do with tools like this, and I'm really glad to see it.
- Drag the hue and saturation curves to customise the tints/shades of a color. Look at the UI mockup as you do this to make sure the tints/shades look good together.
- The color pairings used in the UI mockup all initially pass WCAG contrast checks but this can break if you tweak the lightness curve of a color. The mockup will show warning outlines if this happens. Click on a warning and it'll tell you which color pairs need to have their lightness values moved further apart to fix it.
- Once you're happy, use the export menu to use your colors in your CSS or Figma designs. You can use the mockup as a guide for which color pairs are accessible for body text, headings, button outlines and so on.
Does that make more sense? You really need to be on desktop as well because the mobile UI is more of a demo.
I always learned programming and maths on my own so any advice is welcome!
The goal is to serve the laws in a format that is easy to cite, monitor, or machine-read. It should also have predictable URLs that can be inferred from the law’s name. It will also have side by side AI translations (marked as such).
I cite a lot of laws in my content and I want to automatically flag content for review when a specific paragraph of the law changes. I also want to automatically update my tax calculator when the values change.
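A minimal sketch of how that change-flagging could work, assuming the kind of predictable, machine-readable paragraph URLs the project plans to serve (the URL scheme, file names, and mapping here are hypothetical):

```python
# Hedged sketch of the change-flagging idea: store a hash per cited
# paragraph and flag content for review when the hash changes.
import hashlib
import json
import requests  # assumed available

WATCHED = {
    # content_id -> hypothetical machine-readable paragraph URL
    "tax-calculator": "https://example.org/laws/estg/para-32a.json",
}
STATE_FILE = "law_hashes.json"

def fetch_paragraph_text(url: str) -> str:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text

def check_for_changes() -> list[str]:
    try:
        with open(STATE_FILE) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}
    flagged = []
    for content_id, url in WATCHED.items():
        digest = hashlib.sha256(fetch_paragraph_text(url).encode()).hexdigest()
        if old.get(content_id) not in (None, digest):
            flagged.append(content_id)  # paragraph changed -> review this content
        old[content_id] = digest
    with open(STATE_FILE, "w") as f:
        json.dump(old, f, indent=2)
    return flagged
```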
Basically, a refresh of gesetze-im-internet.de and buzer.de.
Dunno if other governments are this Byzantine in practice (our system seems to be like... manual integration of diff patches) but it's pretty interesting and I really appreciate the work that goes into these types of things.
Where I'm from, citizens _need_ more awareness of their rights today and in the future.
https://github.com/rumca-js/Internet-Places-Database
Still crawling framework
https://github.com/rumca-js/crawler-buddy
Still RSS client
https://github.com/rumca-js/Django-link-archive
Rough idea is easy to use voice mode to record data, then analyze unstructured data with AI later on.
I want to track all relevant life information, so what I'm eating, meds I'm taking, headache/nausea levels, etc.
Adding records is as easy as pressing record on my apple watch and speaking some kind of information. Uses Deepgram for voice transcription since it's the best transcription API I've found.
Will then send all the information to an LLM for analysis. It has a "chat with your data" page to ask questions and try to draw conclusions.
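As a rough sketch of that flow (my own illustration, not the actual app code; `call_llm` is a placeholder for whichever LLM API gets used):

```python
# Rough sketch: transcribed voice notes become timestamped records, and
# "chat with your data" builds a prompt from a recent window of records.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LogEntry:
    at: datetime
    text: str  # e.g. "headache level 6, took ibuprofen, skipped lunch"

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whichever LLM API you use.
    raise NotImplementedError

def recent(entries: list[LogEntry], days: int = 7) -> list[LogEntry]:
    cutoff = datetime.now() - timedelta(days=days)
    return [e for e in entries if e.at >= cutoff]

def ask(entries: list[LogEntry], question: str) -> str:
    context = "\n".join(f"{e.at:%Y-%m-%d %H:%M} {e.text}" for e in recent(entries))
    prompt = (
        "You are analyzing a personal health log. Records:\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```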
Main webapp is done, now working on packaging it into an iOS app so I can pull biometrics from Healthkit. Will then look into releasing it, either on github or possibly in the app store. It's admittedly mostly vibe coded, so not sure if it'll be something releasable, but we'll see...
Let me know if this would interest anyone!
I can suggest the research papers by Markus Dahlem for some in depth modern takes on migraine.
E.g. meditation, yoga, ...
Also, a plug for Oliver Sacks's Migraine which taught me a lot about migraine with aura.
Note that even the anticipation of meeting people can be a mental load.
It uses MedGemma 4B for analyzing medical images and generating diagnostic insights and reports. Of course it must be used with caution; it's not for real diagnostics, but it can serve as a second view.
Currently, it supports chat and report generation, but I'm stuck on what other features to add beyond these. I'm also experimenting with integrating the 27B model; even with 4-bit quantization it looks better than the 4B.
The idea is for the game to make logical sense, but make the player sound completely unhinged from reality "I need to put the toaster on top of the oven to make the lamp spin around, that way I can move the lamp across the room near the couch to unlock the next level"
That’s built on a dataset and paper I wrote called CommonForms, where I scraped CommonCrawl for hundreds of thousands of fillable form pages and used that as a training set:
https://arxiv.org/abs/2509.16506
Next step is training and releasing some DETRs, which I think will drive quality even higher. But the ultimate end goal is working on automatic form accessibility.
We were featured on our local NPR syndicate which is neat: https://laist.com/news/los-angeles-activities/new-grassroots...
https://kpbj.fm/
Since this is Hacker News, I'll add that I'm building the website and archiving system using Haskell and htmx, but what is currently live is a temporary static HTML site. https://github.com/solomon-b/kpbj.fm
On the off chance you are throwing another event, I would love to help you raise much more than $800 one time (my site is https://withfriends.events/)
This might be a naive question which you've probably been asked plenty of times before, so I'm sorry if I'm being tedious here.
Is it really worth the effort and expense to have a real radio station these days? Wouldn't an online stream be just as effective if it was promoted well locally?
A few years ago, a friend who was very much involved in a local community group (one I was also somewhat interested in) asked me if I wanted to help build a low-power FM station. He asked me because I know something about radio, since I was into ham radio etc.
I was skeptical that it was worth the effort. The nerdy part of me would have enjoyed doing it, but I couldn't help thinking that an online stream would probably reach as many people without the hassle and expense of a transmitter, antenna, etc.
I know it's a toss up. Every car has an FM radio. Not everyone is going to have a phone plugged in to Android Auto or Apple Car Play and have a good data plan and have a solid connection.
I also pointed out that the technical effort is probably the small part compared to producing interesting content.
1. Radio is COOL. As a fellow ham I think you would agree with me on this one so I'll leave it at that.
2. Internet streaming gives you wider but far less localized audience. We will have an internet stream, but being radio first shifts the focus to local community and local content.
3. Internet streaming and radio have related but not entirely overlapping histories and contexts which impacts how people produce and consume their content. I love the traditional formats of radio and they are often completely missing in online radio which IMO models itself more often on mixtape and club DJ culture.
4. AI slop is ruining the world. I have this belief that as AI slop further conquers the internet we are going to get to a place where nobody trusts internet content. People will seek out novelty and authenticity (sort of how LLMs do lol) and I think there will be a return to local content and community.
5. Commercial radio sucks. The LPFM system is a wonderful opportunity to create a strong, community driven alternative to corporate media.
Break down your software requirements (Userdoc guides you through the process), refine/confirm, set up your technical specs, coding/business guidelines & guardrails, and then create development plans (specs) which can be easily consumed by coding agents via MCP, or by platforms like Lovable / v0 using Markdown. Working on Cursor background-agent integration atm.
https://userdoc.com
I'm working on a mini-project which monitors official resources on the web and sends email notifications on time. Currently covering around 15000 inhabitants.
https://skoljarev.com/bodulica/
In the AI macro food logging world, there's really only Cal AI, which estimates macros based on an image. I use Cronometer personally, and it's super annoying to have to type everything in manually, so it makes sense why folks reach for something like Cal AI. However, the problem with something like Cal AI is accuracy: it's at best a guess based on the image. Macros for humans tries to be a more traditional weigh-your-food, log-it kind of app, while making the main interface for entering that info more friendly.
I set myself a hard deadline to present a live demo at a local showcase/pitch event thing at the end of the month. I bet the procrastination will kick in hard enough to get the backend hosted with a proper database and a bit more UI polish running on my phone. :-)
Here's a really early demo video I recorded a few weeks ago. I had just spoken the recipe on the left and when I stop recording you can see my backend streams the objects out as they're parsed from the LLM https://www.youtube.com/watch?v=K4wElkvJR7I
I have been working on it for the last two years as a side project, but starting March will be my full time job! Kind of excited and scared at the same time
Could you please provide a Docker image?
Many thanks!
How do you switch from open source (1) to a full-time paid job (2)? I'm curious because I'm still stuck in (1).
Thanks for your feedback
Other than that, I've been doing a lot of fixing of tech debt in my home network from the last six years. I've admittedly kind of half-assed a lot of the work with my home router and my server and my NAS and I want these things to be done correctly. (In fairness to me, I didn't know what I was doing back when I started, and I'd like to think I know a fair bit better now).
For example, when I first built my server, I didn't know about ZFS datasets, so everything was on the main /tank mount. This works but there are advantages to having different settings for different parts of the RAID and as such I've been dividing stuff into datasets (which has the added advantage of "defragging" because this RAID has grown by several orders of magnitude and as a result some of the initial files were fragmented).
The idea is to eventually add more categories like “restaurants,” “theaters,” “roads,” etc., so you can play based on local themes.
I’d love to hear your thoughts - any feedback on what you’d like to see, what feels off, or any issues you run into would be super helpful.
All but one prompt was within a 3-block radius IN the city (again, about 20 minutes from my town's town hall).
For the one prompt I didn't know, I guessed the same 3-block radius as the others, and it was about 2 miles away. Still in the city, not the town I typed in.
It seems like smaller towns will be gobbled up by famous cities' elements. Especially here in New England, where the majority of 'famous' local things are so few.
edit: also, changing the 'radius' resets the city to where the website THINKS I am instead of where I typed in.
The main feature: you can run multiple language servers simultaneously for the same buffer.
One of the main reasons people stick with lsp-mode over Eglot has been the lack of multi-server support. Eglot is otherwise the most "emacsy" LSP client, so I'm working on filling that gap and I hope it could be merged into Emacs one day.
This is still WIP but I've been using it for a while for Python (basedpyright or pyrefly + ruff for linting) and TypeScript (ts-ls + eslint + tailwind language server).
GitHub: https://github.com/pawelkobojek/penteglot
I started this out of frustration that there is no good tool I could use to share photos from my travel and of my kids with friends and family. I wanted to have a beautiful web gallery that works on all devices, where I can add rich descriptions and that I could share with a simple link.
Turned out more people wanted this (got 200+ GitHub stars for the V1) so I recently released the V2 and I'm working on it with another dev. Down the road we plan a SaaS offer for people that don't want to fiddle with the CLI and self-host the gallery.
I also tried the vertical masonry layout, which looks good, but makes no sense if your photos have a chronological order...
The magic happens here: https://github.com/SimplePhotoGallery/core/blob/a3564e30bcb6...
I stumbled across it looking for CSS flex masonry examples.
I've been working on the idea for about a year now. I have put up the funds and set up the corporation. Been busy designing the menu, scouting an ideal location and finding the right front-end staff.
Making it with the Rust game engine, Bevy and really enjoying it so far. Using Blender for making assets. I'm maybe a dumbass for making it as my first game, but I just don't really get excited by smaller projects.
Overall I've found modern games to be (1) overstimulating and (2) driven by background algorithms designed to keep me engaged, which I don't trust (see: the free-to-play model)
It makes tricky functions like torch.gather and torch.scatter more intuitive by showing element-level relationships between inputs and outputs.
For any function, you can click elements in the result to see where they came from, or elements in the inputs to see exactly how they contribute to the result. I found that visually tracing tensor operations clarifies indexing, slicing, and broadcasting in ways that reading the docs can't.
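For example, here's the kind of element-level relationship the site visualizes, for torch.gather along dim=1:

```python
# torch.gather with dim=1: out[i][j] = input[i][ index[i][j] ]
import torch

x = torch.tensor([[10, 20, 30],
                  [40, 50, 60]])
idx = torch.tensor([[2, 0],
                    [1, 1]])

out = torch.gather(x, dim=1, index=idx)
print(out)
# tensor([[30, 10],
#         [50, 50]])
# e.g. out[0][0] = x[0][idx[0][0]] = x[0][2] = 30
```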
You can also jump straight to WhyTorch from the PyTorch docs pages by modifying the base URL directly.
I launched a week or two back and now have the top post of all time on r/pytorch, which has been pretty fun.
https://whytorch.org/torch.mul/
torch.matmul was one of the first functions I implemented on WhyTorch and it uses and highlights rows and columns as you would expect.
I’d love to hear any feedback or outcomes from your training session, please feel free to reach out - email in profile.
https://x.com/oranlooney/status/1977728062289555967
It'll work in sessions: first everyone can suggest games, then in a second phase veto suggestions, then vote, and it'll display the games with the most votes. You can also manage/import a list of your games and it'll show who owns what. It's geared towards video games but will work for board games too. Hope to release it for everyone in the next few weeks.
It's largely finished and functional, and I'm now focused on polish and adding additional builtin functions to expand its capabilities. I've been integrating different geometry libraries and kernels as well as writing some of my own.
I've been stress-testing it by building out different scenes from movies or little pieces of buildings on Google Maps street view - finding the sharp edges and missing pieces in the tool.
My hope is for Geotoy to be a relatively easy-to-learn tool and I've invested significantly in good docs, tutorials, and other resources. Now my goal is to ensure it's something worth using for other people.
The platform also supports HR for the organization by presenting in-depth anonymized data surrounding team interactions, exceptional individuals, and potential bottlenecks within the organization caused by qualitative issues. Aiming to launch by end of year and working with small businesses as free test users for feedback and validation.
It’s one command that lets you boot Linux on other computers via LAN. Cross platform, rootless
I think I’ve figured out a way to make a pxehost app for mobile devices, so you can boot Linux installers with an app on your phone
- No sign-up, works entirely in-browser
- Live PDF preview + instant download
- EU VAT support
- Shareable invoice links
- Multi-language (10+) & multi-currency
- Multiple templates (incl. Stripe-style)
- Mobile-friendly
GitHub: https://github.com/VladSez/easy-invoice-pdf
Would love feedback, contributions, or ideas for other templates/features.
https://github.com/VladSez/easy-invoice-pdf/blob/main/LICENS...
I'm trying to get it polished up for an initial release, including some GitHub Actions config so people can easily run it in CI.
It can process a set of 3-hour audio files in ~20 mins.
I recorded a demo video of how it works here: https://www.youtube.com/watch?v=v0KZGyJARts&t=300s
[1] https://github.com/naveedn/audio-transcriber
I alluded to building this tool on a previous HN thread: https://news.ycombinator.com/item?id=45338694
Thanks for building this. I am trying to set it up but facing this issue:
> `torch` (v2.3.1) only has wheels for the following platforms: `manylinux1_x86_64`, `manylinux2014_aarch64`, `macosx_11_0_arm64`, `win_amd64`
Not sure what the market is for something like this but it's something I've been thinking a lot about since stepping down as CEO of my previous company.
My goal is two-fold:
1. Help teams make better, faster decisions with all context populating a source-of-truth.
2. Help leaders stay eyes-on, and circumstantially hands-on, without slowing everything down. What I'd hope to be an effective version of "Founder Mode".
If anybody wants to play around with it, here's a link to my staging environment:
https://staging.orgtools.com/magic-share-link/5a917388cf19ed...
I've added it to SaaSHub saashub.com/orgtools. If you have an @orgtools.com email you can verify and improve the profile. Cheers!
> Less has been used to modify plural nouns since the days of King Alfred
https://www.merriam-webster.com/dictionary/less
More reading on Wikipedia: https://en.wikipedia.org/wiki/Fewer_versus_less
I originally had "less meetings" before an LLM corrected me into using "fewer meetings". Then when talking about Orgtools to a couple people I heard them say "less meetings" and switched back thinking that sounds slightly more natural (but incorrect).
Sign up for my waitlist (or DM me if you want to know more) here: https://www.getsnapneat.com
I'm curious what sets your app apart?
One thing I miss in MacroFactor is some memory of my previous choices.
Example: If I take a picture of a glass of milk, it always assumes it to be whole milk (3.5% fat). Then I change it to a low fat milk (0.5% fat). But no matter how many times I do that, it keeps assuming that the milk in the photo is whole milk.
So I'm trying to define a multiplication operation using primitive roots.
[0] https://leetarxiv.substack.com/p/if-youre-smart-why-are-you-...
[1] (The other time the US gov put a backdoor in an elliptic curve) https://leetarxiv.substack.com/p/dual-ec-backdoor-coding-gui...
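For readers who want the intuition: in Z_p*, every nonzero element is a power of a primitive root g, so multiplication becomes addition of exponents (discrete logs). A minimal sketch of that idea (the post's actual construction may differ):

```python
# Multiplication in Z_p* via a primitive root: every nonzero element is g^k,
# so multiplying elements is adding exponents mod p-1 (a discrete-log table).
p, g = 7, 3  # 3 is a primitive root mod 7

# Build the discrete-log table: g^k mod p for k = 0..p-2
log = {pow(g, k, p): k for k in range(p - 1)}

def mul(a: int, b: int) -> int:
    """Multiply a*b mod p using only the log table and exponent addition."""
    k = (log[a] + log[b]) % (p - 1)
    return pow(g, k, p)

assert all(mul(a, b) == (a * b) % p
           for a in range(1, p) for b in range(1, p))
```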
It's going to feature a synchronous IPC model where the inter-task 'call graph' is known at compile time, with function-call semantics to pass data between tasks: call(), receive(), reply().
A build tool that reads TOML will generate the kernel calls so that tasks can be totally isolated — all calls go through a supervisor trap, so we have true memory isolation.
Preemptions are possible, but control is yielded only at IPC boundaries, so it's not hard real-time.
That makes behavior super robust and auditable at compile time. Total isolation means tasks can crash catastrophically without affecting the rest of the system. Big downsides are a huge increase in flash usage, a constrained programming model, a complex build system, and task-switching overhead. Just a very different model than what I'm used to at $dayjob.
I want to basically find out, hey what happens when we go full safety!? What’s hard about it? What tradeoffs do we need to make? And also kinda like what’s a different model for multitasking. Written in Rust of course.
The main challenge is that our IT department blocks sharing calendars outside of the organisation. While this is primarily a solution for my own problem and likely not valuable to others, you could probably achieve the same result with tools like n8n or IFTTT.
After acquiring a flight school, I quickly realized how challenging the day-to-day operations were. To solve the problems of aircraft fleet management, scheduling, and student course progress tracking, I developed a comprehensive platform that handles all aspects of running a flight school. Existing software is often outdated and expensive, offering poor value for its high cost. FlightWise was built off the real world experiences of my own school, where it has delivered immediate and invaluable benefits to our entire team, from students to administrative staff. We've just recently started to offer this platform publicly to other flight schools.
Currently my biggest focus is the MUD server I'm working on. It allows a developer to create a simple MUD game (locations, items, combat), but all NPCs are actually just LLM-controlled MUD clients.
It uses Server-Sent Events for the client plus HTTP POST for sending actions. Not a traditional direct-TELNET-style MUD server, but it works well in the modern world.
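A minimal sketch of that SSE + POST pattern, using Flask with a single shared queue (routes and names are illustrative, not the project's actual code; a real server would track one queue per connected player):

```python
# Clients GET /events for a text/event-stream of game output,
# and POST /action to send commands.
import queue
from flask import Flask, request, Response

app = Flask(__name__)
events: queue.Queue[str] = queue.Queue()

@app.post("/action")
def action():
    cmd = request.json["command"]          # e.g. {"command": "go north"}
    events.put(f"You try to: {cmd}")       # the game loop would process this
    return {"ok": True}

@app.get("/events")
def stream():
    def gen():
        while True:
            msg = events.get()             # block until the game emits output
            yield f"data: {msg}\n\n"       # SSE wire format
    return Response(gen(), mimetype="text/event-stream")
```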
Definitely not 100% hand-coded, probably only around 30% at this point, as I've had my original code refactored and expanded many times by now. It's taught me a lot about managing the agent in agentic-coding.
Redesigning investment holdings for wider screens and leaning on hotwired turbo frames. Thankful for once-campfire as a reference for how to structure the backend. The lazy loading attribute works great with css media queries to display more on larger viewports.
Enjoying learning modern css in general. App uses tailwind, but did experiment with just css on the homepage. Letting the design emerge organically from using it daily, prototype with tailwind, then slim it back down with plain css.
Link: https://ohyahapp.com
Interesting challenge was designing for minimal distractions while keeping setup simple for parents. Timer-locked navigation so kids can see what's next but can't start other tasks or switch profiles. Also refactored from schedule-centric (nightmare to maintain) to task-definitions as first-class citizens, which made creating schedules way easier
React Native/Expo + Firebase. On the App Store after months of dogfooding with the family
http://github.com/patched-network/vue-skuilder, docs-in-progress at https://patched.network/skuilder
I am using this stack now to build an early literacy app targeting kids aged 3-5ish at https://letterspractice.com (also pre-release state, although the email waitlist works I think!). LLM assisted edtech has a lot of promise, but I'm pretty confident I can get the unit cost for teaching someone to read down to 5 USD or less.
It's like inventing the refrigerator and all the brochure talk about is the internal engineering of the machine, rather than how keeping food cold is useful from the economic and culinary perspectives.
My focus on that front is the LettersPractice app. I taught my own kids (6, 4) to read using early versions of the same software, and I'm pretty confident about the efficacy of the approach.
As far as the broader project moving toward being a consumer facing applications, there are a few options.
The existing platform-ui is a skeleton / concept sketch of one category: a web platform that allows users to create and subscribe to different courses, where study sessions aggregate content from all subscribed courses. Reddit for knowing stuff and having skills, rather than…
Another broad category is a NoCode ITSaaS (interactive tutoring system as a service?) platform. E.g., a specialized bolt.new for EdTech that uses agentic workflows to create courses covering a given domain or specific input documents (e.g., textbooks, curriculum documents).
Very interested in this sort of stuff.
Should be working now.
Really appreciate the interest.
I've been working on my own arrangements, putting chords in lyrics, and the program produces a page with the chord diagrams next to each song. ChordPro descends from a long lineage of programs that do this, and it's been under active development in the last 3-4 years. The developer is quite nice and attends to bug reports.
Most recipes are a failure for beginners on the first try. I aim to make recipes bulletproof so anyone can pick up any recipe and it will just work.
The goal is to make the best recipe app ever. On a technical level, recipes are built as graphs and assembled on demand. This makes multi-language support easy, any recipe can use any unit imaginable, blind people could have custom recipe settings for their needs, and search becomes OP. There is also a Wikipedia-like database with information that links to all recipes. Because of the graphs, nutritional information, environmental impact, cost, etc. can simply be calculated accurately by following linked graphs. Most recipe apps are targeted at specific geographical regions and languages; this graph system removes a lot of barriers between countries and will also be a blessing to expats. Imagine an American in Europe who wishes to use imperial units and English recipes, but with ingredients native to their new homeland. No problem: just follow a different set of nodes and the recipe is created that way for them.
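To make the graph idea concrete, here's a toy sketch as I read the description: ingredient nodes carry per-gram data, recipe nodes link to ingredients or sub-recipes with quantities, and totals fall out of a traversal. All names and numbers here are illustrative, not the app's actual schema.

```python
# Toy recipe-as-graph: ingredient nodes carry per-gram data, recipe nodes
# link to children with quantities, and totals come from walking the graph.
nutrition_per_gram = {          # ingredient nodes
    "flour": {"kcal": 3.64, "protein_g": 0.10},
    "milk":  {"kcal": 0.64, "protein_g": 0.034},
}

recipes = {                     # recipe nodes: list of (child, grams)
    "batter":   [("flour", 120), ("milk", 250)],
    "pancakes": [("batter", 370)],
}

def totals(node: str, grams: float | None = None) -> dict[str, float]:
    if node in nutrition_per_gram:
        return {k: v * grams for k, v in nutrition_per_gram[node].items()}
    acc: dict[str, float] = {}
    child_total = sum(g for _, g in recipes[node])
    scale = 1.0 if grams is None else grams / child_total
    for child, g in recipes[node]:
        for k, v in totals(child, g * scale).items():
            acc[k] = acc.get(k, 0.0) + v
    return acc

print(totals("pancakes"))  # kcal/protein follow the links automatically
```

Unit conversion and translations would hang off the same nodes, which is what makes the graph approach pay for itself.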
The website is slightly outdated but gives a good idea of what is coming. Current goal is to do beta launch in 2026.
From the marketing side...
I'd make a selection on the website on first visit - I'm a chef / creator - I like to cook
Your CTA (call to action) is... not very effective.
Instagram only has 7 followers and no posts. ...
I like the dedication, but I'd definitely recommend improving your marketing / promotion skills ("if you build it they will come" is a myth, unfortunately...). If you wanna have a call about it, feel free to hit me up: tijlatduckdotcom. I'm also in Europe so timing is easy.
I've created two open-source solutions, one which uses a VM (https://github.com/webcoyote/clodpod) and another which creates a limited-user account with access to a shared directory (https://github.com/webcoyote/sandvault).
Along the way I rolled my own git-multi-hook solution (https://github.com/webcoyote/git-multi-hook) to use git hooks for shellcheck-ing, ending files with blank lines, and avoid committing things that shouldn't be in source control.
So, I built it.
Using ChatGPT's voice agents to generate Github issues tagging @claude to trigger Claude Code's Github Action, I created https://voicescri.pt that allows me to have discussions with the voice agent, having it create issues, pull requests, and logical diffs of the code generated all via voice, hands free, with my phone in my pocket.
Are you reviewing code by voice, like a blind programmer? Have you tried Emacspeak? I know that's not normally hands-free.
https://github.com/tomaytotomato/location4j
I think I am going to re-write the logic to calculate a score on all matches it makes from a given piece of text.
e.g.
"us ca" ---> is this "USA California" or "USA and Canada (CA ISO2 code)"?
"san jose usa" ---> is this "San Jose California, USA" or another San Jose in America
Made primarily for my friend's coffee shop. Data is stored locally, and the app is fully functional when offline. There is an optional "syncing" feature to sync your data with multiple devices which requires a sign up. This is a Progressive Web App built with Web Components. The syncing is made possible with PouchDB/CouchDB.
I still have to write (or screen record) a Getting Started guide but the app is ready for use nonetheless.
The main idea is to gather tech articles in one place and process them with an LLM — categorize them, generate summaries, and try experimental features like annotations, questions, etc.
I hope this service might be useful to others as well. You can sign up with a GitHub account to submit your own articles.
https://stravatocalendar.com/
It's working well and I think I can use the same "backend" to pull this data into a spreadsheet which could be useful for data hungry users/coaches/club and event organizers/etc.
- 3D visualization of sea surface temps over time, very much a work in progress: https://globe-viz.oberbrunner.com
- Also a Deep Time log-scaled timeline of the history of the universe at https://deep-timeline.org
The insight: your architecture diagram shouldn't be a stale PNG in Confluence. It should be your war room during incidents.
Going to be available as both web app and native desktop.
Very keen for feedback so if any of that sounds interesting, feel free to give it a go!
https://github.com/amterp/rad
I'm trying to gather sources and read scientific papers to make a course on that topic, in France.
https://theretowhere.com
It currently supports complex heatmaps based on travel time (e.g. close to work + close to friends + far from police precincts), and has a browser extension to display your heatmap over popular listing sites like Zillow.
I'm thinking of making it into an API to allow websites to integrate with it directly.
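The layer-combination part can be sketched simply; this is a hypothetical scoring function in the spirit of "close to work + close to friends + far from X", not theretowhere's actual implementation:

```python
# Combine several travel-time layers into one heatmap score.
# Weights are illustrative; "far from" just flips the sign.
def heatmap_score(minutes_to: dict[str, float]) -> float:
    weights = {"work": -1.0, "friends": -0.5, "police_precinct": +0.3}
    return sum(w * minutes_to[place] for place, w in weights.items())

# Higher is better: 10 min to work beats 40 min, and distance from the
# positively-weighted place adds to the score.
print(heatmap_score({"work": 10, "friends": 20, "police_precinct": 30}))  # -11.0
print(heatmap_score({"work": 40, "friends": 20, "police_precinct": 30}))  # -41.0
```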
Taking a break from tech to work on a luxury fashion brand with my mum. She hand-paints all the designs. The first collection is a set of silk scarves, and we're moving into skirts and jackets soon.
Been a wonderful journey to connect with my mum in this way. And also to make something physical that I can actually touch. Tech seems so…ephemeral at times
Some earnest and unsolicited feedback on the website: the scroll-based transition is not really working well, looks very jumpy in Safari/MacOS, maybe interpolating between states will help smooth it out. Design-wise, the blur effect is quite jarring, and the product list screams Shopify store and not luxury brand. You already have pretty good photography, I'd feature the portraits heavily instead of the flat product shot. Invest in great typography.
I was incidentally browsing for a new wallet recently and think this might be good inspiration: https://secrid.com/en-nl/collections/carry-with-confidence/. Wish you and your mother success!
It's an API that allows zero-knowledge proofs to be generated in a streaming fashion, meaning ZKPs that use way less RAM than normal.
The goal is to let people create ZKPs of any size on any device. ZKPs are very cool but have struggled to gain adoption due to the memory requirements. You usually need to pay for specialized hardware or massive server costs. Hoping to help fix the problem for devs
It's meant to be a 'rails-like' experience in Go without too much magic and conventions.
Basically, speeding up development of fullstack apps in Go using templ, datastar, sqlc with an MVC architecture and some basic generators to quickly setup models, views and controllers.
The goal is to make it straightforward to design and deploy small, composable audio graphs that fit on MCUs and similar hardware. The project is in its infancy, so there’s plenty of room for experimentation and contributions.
https://github.com/Colahall/SPARK
Are you thinking about supporting deployment on FPGAs like the iCE40 line?
- Wallpaper manager with multi-monitor support and multiple image sources. Change wallpapers daily, hourly, etc.
- Lockscreen image manager with the same modes as the wallpaper feature.
- Screensaver/fullscreen modes manager with many screensaver options and multi-monitor support.
- Custom shortcut menu builder where you can have a custom menu accessible from your tray area.
[0] - https://lumotray.com
It is a tool that lets you create whiteboard explainers.
You can prompt it with an idea or upload a document, and it will create a video with illustrations and voiceover. All the design and animations are done using AI APIs; you don't need any design skills.
Here is a video explainer of the popular "Attention is all you need" paper.
https://www.youtube.com/watch?v=7x_jIK3kqfA
Would love to hear some feedback
The animations / drawings themselves are solid too. I think there's more to play with wrt the dimensions and space of the background. It would be nice to see it zoom in and out for example.
how does it work with long papers? will it ever work with small books?
will try it out tomorrow again
yes it should work.
> i can’t upload the document
Could you please drop an email to rahul at magnetron dot ai with the document. I will set things up for you
Recent focus has been on geolocation accuracy, and in particular being able to share more data about why we say a resource is in a certain place.
Lots of folks seem to be interested in this data, and there's very little out there. Most other industry players don't talk about their methodology, and those that do aren't overly honest about how X or Y strategy actually leads to a given prediction, or the realistic scale or inaccuracies of a given strategy, and so on. So this is an area I'm very interested in at the moment and I'm confident we can do better in. And it's overall a fascinating data challenge!
The rough overview is on my X post here: https://x.com/BobAdamsEE/status/1965573686884434278
It's a long-running process, and the HW is mostly defined (but not laid out) but on pause while I work on porting TockOS to an ATSAMV71, to make sure I won't run into any project-ending issues with the SW before I build the hardware.
The stoneware bitrot was legacy but eventually overwhelmed the architecture during an off-peak environment incident.
I'm tasked with fulfilling runtime dependencies to restore the wall framework, but had issues with build time mixing parameters not compiling well with the piecemeal building blocks.
I finally got it up and running through trial and error, though I sense a full rewrite will eventually be needed in the future.
So I started https://github.com/vicentereig/dspy.rb: a composable, type-safe version built for Rubyists who want to design and optimize prompts, and reuse LLM pipelines without leaving their language of choice. Working with DSPy::Signatures reminds me a bit of designing a db schema with an ORM.
It’s still early, but it already lets you define structured modules, instrument them in Langfuse, wire them up like functional components, and experiment with signature optimization. All in plain Ruby.
I'm still rebuilding OnlineOrNot's frontend to be powered by the public REST API. Uptime checks are now fully powered by a public API (still have heartbeat checks, maintenance windows, and status pages to go).
Doing this both as a means of dogfooding, and adding features to the REST API that I easily dumped into the private GraphQL API without thinking too hard. That, and after I finish the first milestone (uptime checks + heartbeat/cron job monitors), I'll be able to start building a proper terraform provider, and audit logs.
Basically at the start of the year I realised GraphQL has taken me as far as it can, and I should've gone with REST to start with.
https://glouw.com/2025/10/12/Ensim4.html
What a neat tool!
I'm calling it a "Micro Functions as a Service" platform.
What it really is, is hosted Lua scripts that run in response to incoming HTTP requests to static URLs.
It's basically my version of the old https://webscript.io/ (that site is mostly the same as it was as long as you ignore the added SEO spam on the homepage). I used to subscribe to webscript and I'd been constantly missing it since it went away years ago, so I made my own.
I mostly just made this for myself, but since I'd put so much effort into it, I figure I'm going to try to put it out there and see if anyone wants to pay me to use it. Turns out there's a _lot_ of work that goes into abuse prevention when you're running code from literally anyone on the internet, so it's not ready to actually take signups yet. But there is a demo on the homepage.
Recently I've managed to port the game onto a real-world cyberdeck, the uConsole. [1]
[0] https://store.steampowered.com/app/3627290/Botnet_of_Ares/
[1] https://tiniuc.com/hacksim-on-cyberdeck/
- A front-end library that generates 10kb single-html-file artifacts using a Reagent-like API and a ClojureScript-like language. https://github.com/chr15m/eucalypt
- Beat Maker, an online drum machine. I'm adding sample uploads now with a content accessible storage API on the server. https://dopeloop.ai/beat-maker
- Tinkering with Nostr as a decentralized backend for simple web apps.
In short, an explorable database of movies, TV shows, books and board games organised around the time and place that they're set. So if you're interested in stuff set during the French Revolution but not in Paris, you could find it there, for instance.
Currently we have two tools that are already being used by different companies.
The first is the Flowmono E-Sign tool; you can sign and send documents securely from anywhere, without printing or scanning, and it is cheaper than other e-sign platforms.
And with Flowmono Workflow Automate, you can connect your tools and set up smart workflows that handle repetitive tasks for you, saving time and keeping your processes running smoothly.
You can check both apps here and let me know what you think: https://www.flowmono.com/en-US/
https://maudit.org https://github.com/bruits/maudit
https://gem-words.com/
I only recently reached an alpha - and I am looking for testers!
- Alpha screenshot: https://drive.google.com/file/d/1Wi6MqxC17iIzfSL--_nNxHxbID1...
- Rambling why I built it, plus Discord link: https://progress.compose.sh/about
Even though I am not your target audience (Linux i3 user myself), I would be interested in knowing how much "hacking" of the macOS system is required to do this. Is it hard to get a list of running apps for your Task Bar? Is it hard to list the apps for the menu? How about keeping it all "on top" while other windows e.g. get maximized/minimized/full-screen, etc?
You actually nailed the major pain points. Particularly window focus and state management. I've spent months solving this problem alone.
-
1. Applications data list: Getting the list is easy! Finding out which apps in that list are "real" apps isn't. Getting icons isn't. Reliably getting information on app state isn't. Finding out why something doesn't work right is as painful as can be. Doing all this in a performant way is a nightmare.
2. Applications menu renderer: Rendering the list for the menu is easy enough: the macOS app sends this data via socket. The frontend is just web sockets and web components under the hood (https://lit.dev). The difficult part was converting app icons to PNG, which is awfully slow. So a cache-warmup stage on startup finds all apps, converts their icons to PNG, and caches them in the app directory for reading.
3. Window state: again, by far the worst and it isn't even close. Bugs galore. The biggest issue was overriding macOS core behavior on what a window is, when it's focused, and how to communicate its events reliably to the app. Although I did include a couple private APIs to achieve this, you can get pretty far by overriding Window class types in ways that I don't think were intended (lol). There is trickery required for the app to behave correctly: and the app is deceptively simple at a glance.
-
One bug, and realization, that still makes me chuckle today.. anything can be a window in macOS.
I'm writing this on Firefox now, and if I hover over a tab and a tooltip pops up - that's a window. So a fair amount of time has gone into determining _what_ these apps are doing and why. Then coming up with rules on determining when a window is likely to be a "real" window or not.
The Accessibility Inspector app comes standard on macOS and was helpful for debugging this, but it was a pain regardless.
Recently I started executing the upstream spec tests against it as a means to increase spec conformance. It's non-streaming, which is a non-starter for many use cases, but I'm hoping to provide a streaming API later down the road. Also, the errors interface is still very much WIP.
All that said, it's getting close to a fully-conformant one and it's been a really fun project.
https://github.com/agis/wadec
P.S. I'm new to the language so any feedback is more than welcome.
https://github.com/aabiji/logbuddy
- carcassonne game agent
Everything is still on private repos because it is too nasty, and I'm shy
https://github.com/zserge/grayskull
https://trendyzip.com/access-code/hn4freeoct13
We’re working directly with partner housing unions and charities in Britain and Ireland to build the first central database of rogue landlords and estate agents. Users can search an address and see if it’s marked as rogue/dangerous by the local union, as well as whether you can expect to see your deposit returned, maintenance, communication - etc.
After renting for close to a decade, it’s the same old problems with no accountability. We wanted to change this, and empower tenants to share their experiences freely and easily with one another.
We’re launching in November, and I’m very excited to announce our partner organisations! We know this relies on a network effect to work, and we’re hoping to run it as a social venture. I welcome any feedback.
The current challenge is the display. I’ve struggled to learn about this part more than any other. After studying DVI and LVDS, and after trying to figure out what MIPI/DSI is all about, I think parallel RGB is the path forward, so I’ve just designed a test PCB for that, and ordered it from JLCPCB’s PCBA service.
It's been a great project to understand how design depends on a consistent narrative and purpose. At first I put together elements I thought looked good but nothing seemed to "work" and it's only when I took a step back and considered what the purpose and philosophy of the design was that it started to feel cohesive and intentional.
I'll never be a designer but I often do side projects outside my wheelhouse so I can build empathy for my teammates and better speak their language.
I started my program in Swift and SwiftUI, although for various reasons I'm starting to look at Dart and Flutter (in part because being multiplatform would be beneficial, and in part because I am getting the distinct feeling this program is more ambitious than where SwiftUI is at currently). It isn't a direct port of Dramatica by any stretch, instead drawing on what I've learned writing my own novels, getting taught by master fiction writers, and being part of writing workshops. But no other program that I've seen uses Dramatica's neatest concepts, other than Subtxt, a web-based, AI-focused app which has recently been anointed Dramatica's official successor. (It's a neat concept, but it's very expensive compared to the original Dramatica or any other extant "fiction plotting" program. Also, there's a space for non-AI software here, I suspect: there are a lot of creatives who are adamantly opposed to it in any form whatsoever.)
I really really want something like this that I can run locally without paying $100 a year for Arc Studio.
It's a real life treasure hunt in the Blue Ridge Mountains with a current total prize of $31,200+ in gold coins and a growing side pot.
I modeled it off of last year's Project Skydrop (https://projectskydrop.com) which was in the Boston area.
* Shrinking search area (today, Day 5, it will be 160 miles, on Day 21 it'll be just 1 foot wide)
* 24/7 webcam trained on the jar of gold coins sitting on the forest floor just off a public hiking trail
* Premium upgrades ($10 from each upgrade goes towards the side pot) for aerial photos above the treasure and access to a private online community (and you get your daily clues earlier)
* $2 from each upgrade goes towards the goal of raising $20k for continued Hurricane Helene relief
So far the side pot is $6k and climbing.
It's been such a fun project to work on, but also a lot of work. Tons of moving parts and checking twice and three times to make sure you've scrubbed all the EXIF data, etc.
did you do any math around predicting if 'donating' this gold to a treasure hunter would yield an even greater amount to hurricane relief?
I’ve spent a while understanding what sort of market would make it viable. I think it does actually work if you can square: 10K participants per major metro area, revenue of about 2.9M per metro area (so say, 5K monthly recurring with about 50 customers).
At that point you could pay data union participants about $5 a week to share their location data with you.
From talking to some previous data union folks, the major challenges are paying out (my target is much higher than any union managed), and people dropping out over time.
My bet is that these are both solvable things by selling data products rather than just bundles of data, and the data source being very passive.
I’m also interested in the idea that such a union should act more like a union than previous efforts in this space, by actively defending members’ data from brokers.
1. I shared the app with the small audience I have and received some feedback in very unexpected places. First, it was hard to understand how lists work because putting things into lists was an unobvious process. I fixed that by adding DnD that works well both with mouse and touch (turned out it’s two separate APIs). Second, users thought that the screenshot on the quite minimal landing page was the real app, and they clicked on it. The problem was so frequent and surprising that I decided to add something funny for people who do that, as I’m not willing to contribute a lot of time to the landing right now.
2. I underestimated how bad discoverability on the internet is. My expectation was that I would make my site fully server-side rendered, add a basic sitemap to Search Console, and have a few dozen organic users during the pre-holiday season when users are filling their wishlists. In reality, I got zero — not just no users, but not even visits. So I started actually working on SEO: no black magic, just adding slightly more complex sitemaps, micro-markup, and other stuff which I thought only products competing for the first page would need.
My next steps are to work on getting some minimal organic inflow of users and improving stuff related to auth and user management, which is the most time-consuming part of the work right now.
https://dotsjournal.app
It’s an iOS app to help tracking events and stats about my day as simple dots. How many cups of coffee? Did I take my supplements? How did I sleep? Did I have a migraine? Think of it like a digital bullet journal.
Then visualizing all those dots together helps me see patterns and correlations. It’s helped me cut down my occurrence of migraines significantly. I’m still just in the public beta phase but looking forward to a full release fairly soon.
Would love to hear more feedback on how to improve the app!
[0] https://github.com/stryan/materia and/or https://primamateria.systems/
I think app icons are an underrated artistic format, but they’ve only been used for product logos. I made 001 to explore the idea of turning them into an open-ended creative canvas. There are 99 “exhibit spaces” in the gallery, and artists can claim an exhibit to install art within. Visitors purchase limited-edition copies of pieces to display as the app’s icon, the art’s native format.
It’s a real-money marketplace too - the app makes money by taking commission of sales (Not crypto). I like economic simulation games and I think the constraints here could be interesting.
I’m currently looking for artists to exhibit in the gallery, if anyone is interested, or knows someone who may be, please let me know!
https://apps.apple.com/us/app/teletable-football-teletext/id...
Drones are real bastards - there are a lot of startups working on anti-drone systems and interceptors, but most of them are using synthetic data. The data I'm collecting is designed to augment the synthetic data, so anti-drone systems get closer to field testing
My perfect user is a bodybuilder, a powerlifter, or someone who just takes weightlifting seriously.
I've also been obsessed with making it iOS native and a one-time purchase.
Been trying to build in public on Bluesky: @tobu.bsky.social
Simple landing page with a waitlist: https://plates.framer.website/
That's the philosophy behind it https://medium.com/@chrisveleris/designing-a-life-management...
Very easy install, check it out!
He liked what I built for him and I got jealous, so I expanded it with my own profile (Trail running).
Then, I got curious… Could I build a full web platform for people to track their sporting life? I mean we have LinkedIn and CVs for our job career, why not celebrate all our sports/training efforts as well.
After a couple of months on the side, I'm pretty happy with Flexbase. If you're into sports, give it a try and let me know what's missing for you.
Note: it's mobile-only past the front page.
https://flexbase.co/ My profile: https://flexbase.co/athletes/96735493
You can list the sports you're doing or did in your entire life, you can add your PRs, training routines, gear, competition results, photos. You can also list your clubs, and invite/follow your training buddies.
Honestly, I'm not sure where (or if) to expand it... Turn it into a Club-centric tool, make it more into a social network for sporty people.
Lots of ideas, but I'd love to find someone to work on it with me. I find that building alone is less fun.
Thanks for your sporty feedback.
My current prototype scans potential lookalikes for a target domain and then tracks DNS footprint over time. It's early, but functional - and makes it easier to understand if some lookalike domain is looking more "threat-y".
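For the curious, lookalike generation itself can be sketched in a few lines. This toy version (my own illustration, not the prototype's code) covers omission, transposition, and a few homoglyphs; a real scanner would also cover bitsquatting, TLD swaps, keyboard adjacency, and more:

```python
# Generate lookalike candidates for a target domain.
HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3"}

def lookalikes(domain: str) -> set[str]:
    name, tld = domain.rsplit(".", 1)
    cands = set()
    for i in range(len(name)):
        cands.add(name[:i] + name[i + 1:])                              # omission
        if i < len(name) - 1:
            cands.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])  # swap
        if name[i] in HOMOGLYPHS:
            cands.add(name[:i] + HOMOGLYPHS[name[i]] + name[i + 1:])    # homoglyph
    cands.discard(name)
    return {f"{c}.{tld}" for c in cands}

print(sorted(lookalikes("example.com")))
```

Tracking which of these candidates actually resolve, and how their DNS footprint changes over time, is where the "threat-y" signal comes from.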
I've also been working on automating the processing of parent-survey responses for my kid's school using LLMs. The goal is to produce consistent summarization and statistics across multiple years, give families a clearer voice, and help staff and leadership at the school understand what has been working well (and where the school could improve).
There are several really good products in this space FYI, but I'm sure a new angle can be competitive.
Being a Ruby on Rails consultant, I frequently see Active Storage transformations become a bottleneck for web servers by eating up resources and making them sweat.
I built Fileboost to solve this problem for my customers. I'd love any feedback.
Right now I am getting my first users and already getting great feedback. Many things on the roadmap.
Always eager to learn more about others' pain points when it comes to React Native/mobile development. Let me know what you think!
We have a fun group working on it on Discord (find the discord invite in the How To)
https://guessix.com/
An open source powerful network reconnaissance and asset discovery tool built with Go and React
https://codeberg.org/Timwi/JigGen
The newest addition is a hexagonal piece cut, bringing the number of built-in geometries to 5.
Create REST APIs for PostgreSQL databases in minutes.
https://npgsqlrest.github.io/
- One-man project (me)
- Been doing it for well over a year now
- No sponsorship, no investors, no backers, no nothing, just my passion
- I haven't even advertised much; this may be the first or second time I'm sharing a link
- On weekdays I'm building serious stuff with it
- On weekends I'm preparing a new major version with lessons learned from doing a real project with it
Not going to stop. But I might seek sponsors in the future; not sure how that will turn out. If not, that's OK, I'm cool being the only user.
There are a few similar projects too; one is itself a startup which is sadly on the verge of bankruptcy, and another aggregates only IT-related jobs.
I’ve been working for the past 3 years on SelfHostBlocks https://github.com/ibizaman/selfhostblocks, making self-hosting a viable and convenient alternative to the cloud for non technical people.
It is based on NixOS and provides a hand-picked groupware stack: user-facing there is Vaultwarden and Nextcloud (and a bunch more but those 2 are the most important IMO for non technical people as it covers most of one’s important data) and on the backend Authelia, LLDAP, Nginx, PostgreSQL, Prometheus, Grafana and some more. My know-how is in how to configure all this so they play nice together and to have backups, SSO, LDAP, reverse proxy, etc. integration. I’m using it daily as the house server, I’m my first customer after all. And beginning of 2025 it passed my own internal checkpoint to be shared with others and there’s a handful of technical users using it.
My goal is to work on this full time. I started a company to provide a white glove installation, configuration and maintenance of a server with SelfHostBlocks. Everything I’ll be doing will always be open source, same as the whole stack and the server is DIY and repair friendly. The continuous maintenance is provided with a subscription which includes customer support and training on the software stack as needed.
Financial institutions and governments don’t spot crime because of incomplete information at individual firms. We help them understand federated learning and how to effectively collaborate and not just talk about it. All code is open source, so you can always help out ;-)
Some industry players are coming around: https://www.swift.com/news-events/press-releases/swift-ai-in...
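For readers new to the idea: the core of the most common scheme, federated averaging (FedAvg), is small enough to sketch. This illustration is mine, not the project's code; the point is that only model weights leave each institution, never raw records.

```python
# Each party trains locally; the server pools only the model weights.
import numpy as np

def fedavg(client_weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    """Average client model weights, weighted by local dataset size."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(client_weights, n_samples))

# Three banks with differently sized datasets; only these vectors move.
clients = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
sizes = [1000, 3000, 1000]
print(fedavg(clients, sizes))  # -> weighted global model update
```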
The challenge is: how can ChatGPT understand your query (or prompt)? Raw data is not good enough, so I use a term called "AI Understanding Score" to measure it: https://senify.ai/ai-understanding-score. I think this index will help users build more context, so that the AI knows more and answers correctly.
This is very early work without every detail considered; I'd really like your feedback and suggestions.
You can have a try with some MCP services here: https://senify.ai/mcp-services
Thanks.
I'm a robotics engineer by training, this is my first public launch of a web app.
Try it: https://app.veila.ai (free tier, no email required)
Homepage: https://veila.ai
Happy to answer any questions.
In this space, it's more about trust and what you have done in the past than anything else. Audits and whatnot are nice, but I need to be able to trust that your decisions will be sound. Think of how Steam's Gabe gained his reputation. Not exactly an easy feat these days.
FWIW, favorited for testing.
I'd love to hear your feedback if you get around to test Veila, e.g. on hey@veila.ai.
Not sure if there's more to say about it right now, except that fuzz tests are good for this sort of low-level programming with disk layouts involved. They drive up test execution time, but it's almost hard to build them too early or have too many of them: there's almost always an unimaginable number of weird corner cases around block boundaries and the like that are hard to identify by staring at the code and writing classic unit tests.
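In the same spirit, here's what a minimal property-based test over a toy length-prefixed block layout might look like (using the Hypothesis library; the parser is a stand-in of mine, not the actual storage engine):

```python
# Hammer a toy block parser with arbitrary bytes, including
# boundary-straddling and truncated inputs.
from hypothesis import given, strategies as st

def parse_blocks(data: bytes) -> list[bytes]:
    """Toy parser: [1-byte length][payload]... Raises on truncation."""
    out, i = [], 0
    while i < len(data):
        n = data[i]
        if i + 1 + n > len(data):
            raise ValueError("truncated block")
        out.append(data[i + 1:i + 1 + n])
        i += 1 + n
    return out

@given(st.binary(max_size=256))
def test_roundtrip_or_clean_error(data: bytes):
    # Property: the parser either succeeds or fails with ValueError --
    # it must never crash, hang, or read out of bounds.
    try:
        blocks = parse_blocks(data)
    except ValueError:
        return
    assert sum(len(b) + 1 for b in blocks) == len(data)

if __name__ == "__main__":
    test_roundtrip_or_clean_error()  # Hypothesis runs many random cases
```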
Right now it connects to local and remote databases like SQLite and Postgres, lets you browse schemas and tables instantly, edit data inline, and create or modify tables visually. You can save and run queries, generate SQL using AI, and import or export data as CSV or JSON. There’s also a fully offline local mode that works great for prototyping and development.
One of the more unique aspects is that DB Pro lets you download and run a local LLM for AI-assisted querying, so nothing ever leaves your machine. You can also plug in your own cloud API key if you prefer. The idea is to make AI genuinely useful in a database context — helping you explore data and write queries safely, not replacing you.
The next big feature is a Visual Query Builder with JOIN support that keeps the Visual, SQL, and AI modes in sync. After that, I’m working on dashboards, workflow automation, and team collaboration — things like running scripts when data changes or sharing queries across a workspace.
The goal is to make DB Pro the most intuitive way to explore, query, and manage data — without the usual enterprise clutter. It’s still early, but it’s already feeling like the tool I always wanted to exist.
You can see it here: https://dbpro.app
Would love to hear feedback, especially from people who spend a lot of time in database clients — what’s still missing or frustrating in the current landscape?
Lately, I've been hacking on improving its linear algebra support (as that's one of the key focuses I want - native matrix/vector types and easy math with them), which has also helped flush out a bunch of codegen bugs. When that gets tedious, I've also been working on general syntax ergonomics and fixing correctness bugs, with a view to self-hosting in the future.
This weekend I’m working on making the parsing more robust. The most common friction I’ve heard is that downloading books elsewhere and importing them into the app is distracting. I’m torn between expanding it to include a peer-to-peer book exchange or turning it into an RSS feed reader.
After using evil-mode and meow, this is a system I've come up with that addresses issues I ran into with both.
https://codeberg.org/ideasman42/emacs-meep
I'm putting a bunch of security tools / data feeds together as a service. The goal is to help teams and individuals run scans/analysis/security project management for "freemium" (certain number of scans/projects for free each month, haven't locked in on how it'll pan out fully $$ wise).
I want to help lower the technical hurdles to running and maintaining security tools for teams and individuals. There are a ton of great open source tools out there, most people either don't know or don't have the time to do a technical deep dive into each. So I'm adding utilities and tools by the day to the platform.
Likewise, there's a built in expert platform for you to get help on your security problems built into the system. (Currently an expert team consisting of [me]). Longer term, I'm working on some AI plugins to help alert on CVEs custom to you, generate automated scans, and some other fun stuff.
https://meldsecurity.com/ycombinator (if you're interested in free credits)
Some are small tech jokes, while others were born from curiosity to see how LLMs would behave in specific scenarios and interactions.
I also tried to use this collection of experiments as a way to land a new job, but I'm starting to realize it might not be serious enough :)
Happy to hear what you think!
https://llmparty.pixeletes.com
* LLMs are accessible wherever Telegram is accessible
* A multitude of models to choose from (ChatGPT, Claude, Gemini), with more coming.
* Full control over the bot's behaviour is in the user's hands: I don't add any system messages or temperature/top_p defaults. I give a UI for full control over system messages, temperature, top_p, thinking, web searching/scraping, and more to come.
* Q&A-like context handling. Context is not carried through the whole bot; it's carried through chains of replies. Conversations can naturally be branched, or use different models across messages.
--
This is my hobby project and one of main tools for working with LLMs, thus I'm going to stick to it for quite a while.
Live demo: https://play.tirreno.com/login (admin/tirreno)
Github: https://github.com/tirrenotechnologies/tirreno
https://storytveller.com
I always have stories in mind but don't have time to write them all out; this lets me just enter the idea and the story comes out.
It came from my frustration with Google Maps in Germany, where businesses constantly file take-down requests against bad reviews and ratings. To get around this, we only list places we recommend.
Still a work in progress, but expecting to release by end of year. Built on Rust + Tauri, in case anyone is curious.
I've created various open-source and commercial tools in the multimedia space over the last 10+ years and wanted to put it all together into something more premium.
The first is a DNS blocker called Quietnet - https://quietnet.app. It's born out of my interest in infrastructure: I wanted to build an opinionated DNS blocker that helps mom-and-pops be safer on the Internet. At the end of the day it's just the typical Pi-hole in the cloud, but with my personal interest in providing stronger privacy for our users while keeping their families safe.
The second, is a small newsletter aggregator tool called Newsletters.love - https://newsletters.love/.
I wanted to create a way for people to start curating their own list of newsletters and then share them with their friends and families. The service generates a private email address that they can use to subscribe to newsletters, and then they can read those newsletters whenever they want without anything getting lost in their email inbox.
The idea is to enable a comment section on any webpage, right as you’re browsing. Viewing a Zillow listing? See what people are excited about with the property. Wonder what people think about a tourist attraction? It’ll be right there. Want to leave your referral or promo code on a checkout page for others? Post it.
Not sure what the business model will look like just yet. Just the kind of thing I wish existed compared to needing to venture out to a third party (traditional social media / forums etc) to see others’ thoughts on something I’m viewing online. I welcome any feedback!
Keep in mind I’d only be storing comments and the references to where they’re posted. I don’t need to know the webpages ahead of time at all.
Last month:
• wrote my first NEON SIMD code
• implemented adaptive quadrature with Newton–Cotes formulas
• wrote a tiny Markov-chain text generator
• prototyped an interactive pipeline system for non-normalized relational data in Lua by abusing operator overloading
• load-tested and taste-tested primary batteries at loads exceeding those in the datasheet; numerically simulated a programmable load circuit for automating the load testing
• measured the frequency of subroutine calls and leaf subroutine calls in several programs with Valgrind
• wrote a completely unhealthy quantity of commentary on HN
New ideas I'm thinking about include backward-compatible representations of soft newlines in plain ASCII text, multitouch calculators supporting programming by demonstration, virtual machines for perfectly reproducible computations, TCES energy storage for household applications beyond climate control such as cooking and laundry, canceling the harmonic poles of recursive comb filters with zeroes in the nonrecursive combs of a Hogenauer filter, differential planetary transmissions for compact extreme reductions similar to a cycloidal drive, rapid ECM punching in aluminum foil, air levigation of grog, ultra-cheap passive solar thermal collectors, etc. Happy to go into more detail if any of these sound interesting.
Working on faceted search for logs and CLI client now and trying to share my progress on X.
The main pitch is you have minimal dependencies and overheads and can run tests natively on pandas/polars/pyspark/dask/duckdb/etc (thanks to the awesome Narwhals project)
It's mostly there for v1 right now, but I'm keen to add a tiny bit more functionality, and a lot more docs. Working on something that's automated alongside the test suite, which should keep things reliable and fresh (I'll find out soon enough).
[0] https://github.com/benrutter/wimsey / https://codeberg.org/benrutter/wimsey
Basically the title explains it: I challenged myself to make a Chrome extension a day for a month. I've been posting my progress on Reddit, and my first two extensions have just been accepted to the Chrome Web Store (I've only finished day 3 so far, so those were quick reviews!). For those interested:
Day 1: Minimal Twitter
Day 2: No Google AI Overview in Google Search
Day 3: No Images Reddit (Not Published, yet!)
I'm posting daily, I would love to hear thoughts on the extensions!!
https://x.com/uithoughts
So I will rest for a few days :D
Recording video lessons is a lot of work, often a few hours for a 10min lesson
And then after recording the lesson it’s hard to keep it up to date and often just easier to re-record the whole video
So I'm bringing together slides, a screen + camera recorder, and a timeline editor into a unified workflow.
I noticed a gap - our customers are required to upload sensitive documents but often hesitate at the thought of uploading documents in the intercom/crisp interface, citing privacy concerns.
I thought: how difficult would it be to build an app that sends documents to your own Google Drive? Turns out it's very easy. In a week, we built an app that renders an iframe in the Intercom chat interface and sends documents straight to our Google Drive folder, bypassing Intercom altogether.
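For anyone curious, the server-side half of that really is only a few lines. A minimal sketch with the official Google API client, assuming Application Default Credentials are already set up and with FOLDER_ID and the file name as placeholders:

```
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumes Application Default Credentials; FOLDER_ID is hypothetical.
drive = build('drive', 'v3')
media = MediaFileUpload('uploaded_doc.pdf', mimetype='application/pdf')
drive.files().create(
    body={'name': 'uploaded_doc.pdf', 'parents': ['FOLDER_ID']},
    media_body=media,
    fields='id',
).execute()
```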
We’re now investigating uploading to s3 or azure blob storage and generating summaries of documents that are sent to the intercom conversation thread so ops teams can triage quicker.
Let me know what you think!
https://www.fibrehq.com/
Self-hosted compute can also be linked to Daestro to run jobs on.
[0]: https://daestro.com
It's basically a reverse-proxy-as-a-service. I handle TLS termination and cert management, and offer routing rules, rate limiting, WAF + DDoS protection, proxy + web analytics, redirects, etc. All accessible via a very simple API.
Underneath it's Caddy hosted on AWS for proxy fleets, and Heroku for Web + API fleets.
Any feedback is welcome!
The big thing I wanted to try is automatic global routing via MQTT.
Everything is globally routable. You can roam around between gateway nodes, as long as all the gateways are on the same MQTT server.
And there's a JavaScript implementation that connects directly to MQTT. So you can make a sensor, go to the web app, type the sensor's channel key, and see the data, without needing to create any accounts or activate or provision anything.
https://github.com/EternityForest/LazyMesh#
https://pagecord.com
It’s minimal in design, but packed with features.
The USP that customers seem to really value is posting by email. It massively reduces the friction required to blog and is surprisingly enjoyable.
Launching next week is custom home pages with dynamic variables. It’s in beta already, see https://iamgregb.io.
Pagecord is free and source available with an unbeatably priced premium plan of $29/year.
Follow along on GitHub: https://github.com/lylo/pagecord
Feedback welcome! :)
Trying to fix this problem with Eternal Vault.
Link: https://eternalvault.app
https://imgur.com/a/9kWMXVe
Next up: an MCP server so devs can pull data from SecurityBot's various monitors directly into their IDE.
Besides the LLM experimentation, this project has allowed me to dive into interesting new tech stacks. I'm working in Hono on Bun, writing server-side components in JSX and then updating the UI via htmx. I'm really happy with how it's coming together so far!
If you zoom out it's meant to look something like a thermal vent with cellular life. Rank and karma cause the cells to bio-illuminate. Each cell is a submission, each organelle is a comment thread, and every shape represents a live connection to the Firebase HN API. It also has features to search, filter, and go back in time as far as the backend has been running.
It's been a passion project of mine. My little Temple OS. And I'll keep adding little features that please me.
https://hackernews.life/?s=top&id=45561428&c=0&t=1760303616
You can press the fast-forward button or drag the slider to the right to watch it evolve.
I was motivated to build this because many great personal finance and budget apps didn't offer integrations with the banks I used (understandable, given the complexity and costs involved). I wanted to tackle this problem and help build the missing open banking layer for personal finance apps, with very low costs (a few dollars a month) and a very simple API, or built-in integrations.
Still working on making this sustainable, but it's been quite a learning experience so far, and I'm quite excited to see it already making a difference for so many people :)
For work, https://heyoncall.com/ as the best tool for on-call alerting, website monitoring, cron job monitoring, especially for small teams and solo founders.
I guess they both fall under the category of "how do you build reliable systems out of unreliable distributed components" :)
That’s why I’ve been building 'Fragno', a framework for creating full-stack libraries. It allows library authors to define backend routes and provides reactive primitives for building frontend logic around those routes. All of this integrates seamlessly into the user’s application.
With this approach, providers like Stripe can greatly improve the developer experience and integration speed for their users.
https://fragno.dev
- Getting into RTL SDR, ordered a dongle, should be fun, want to build a grid people can plug into
- Bringing live transcripts, search and AI to wisprnote
- Moving BrowserBox to a binary release distribution channel for IP enforcement and ease of installation. The public repo will no longer be updated except for docs, versions, and the base install script; all dev happens internally, with binaries released to https://github.com/BrowserBox/BrowserBox. Too many "companies" (even "legit", large ones) abuse ancient forks and steal our commercial updates without a license, or violate the previous permissive license's conditions, like AGPL source provision. The business lesson: even commercially licensed source-available eats into the sales pipeline, because violators who could pay assume false impunity and take "freebies" "because they can." There's no perfect protection, but from now on enforcement will ramp up, and source access is an add-on for minimum-ACV customers only. So many enhancements are coming down the pipe that it's going to be many improved versions from here.
- Creating an improved keyboard for iOS swipe typing; I don't like the settings or the word choices under ambiguity, and I think it can be better.
The Pain Point: If you are analyzing a large YouTube channel (e.g., for language study, competitive analysis, or data modeling), you often need the subtitle files for 50, 100, or more videos. The current process is agonizing: copy-paste URL, click, download, repeat dozens of times. It's a massive time sink.
My Solution: YTVidHub is designed around bulk processing. The core feature is a clean interface where you can paste dozens of YouTube URLs at once, and the system intelligently extracts all available subtitles (including auto-generated ones) and packages them into a single, organized ZIP file for one-click download.
Target Users: Academic researchers needing data sets, content creators doing competitive keyword analysis, and language learners building large vocabulary corpora.
The architecture challenge right now is optimizing the backend queuing system for high-volume, concurrent requests to ensure we can handle large batches quickly and reliably without hitting rate limits.
It's still pre-launch, but I'd love any feedback on this specific problem space. Is this a pain point you've encountered? What's your current workaround?
I haven't upgraded to bulk processing yet, but I imagine I'd look for some API to get "all URLs for a channel" and then process them in parallel.
You've basically hit on the two main challenges:
Transcription Quality vs. Official Subtitles: The Whisper approach is brilliant for videos without captions, but the downside is potential errors, especially with specialized terminology. YTVidHub's core differentiator is leveraging the official (manual or auto-generated) captions provided by YouTube. When accuracy is crucial (like for research), getting that clean, time-synced file is essential.
The Bulk Challenge (Channel/Playlist Harvesting): You're spot on. We were just discussing that getting a full list of URLs for a channel is the biggest hurdle against API limits.
You actually mentioned the perfect workaround! We tap into that exact yt-dlp capability—passing the channel or playlist link to internally get all the video IDs. That's the most reliable way to create a large batch request. We then take that list of IDs and feed them into our own optimized, parallel extraction system to pull the subtitles only.
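For readers following along, the yt-dlp flat-extraction trick looks roughly like this (a sketch rather than our production code; the channel URL is just an example):

```
import yt_dlp

# Flat extraction lists entries without downloading any media,
# which is the cheap way to enumerate a channel's video IDs.
def list_video_ids(channel_url):
    opts = {'extract_flat': True, 'quiet': True}
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(channel_url, download=False)
    return [entry['id'] for entry in info.get('entries', [])]

ids = list_video_ids('https://www.youtube.com/@SomeChannel/videos')
print(f'{len(ids)} videos found')
```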
It's tricky to keep that pipeline stable against YouTube’s front-end changes, but using that list/channel parsing capability is definitely the right architectural starting point for handling bulk requests gracefully.
Quick question for you: For your analysis, is the SRT timestamp structure important (e.g., for aligning data), or would a plain TXT file suffice? We're optimizing the output options now and your use case is highly relevant.
Good luck with your script development! Let me know if you run into any other interesting architectural issues.
The biggest challenge with this approach is that you probably need to pass extra context to LLMs depending on the content. If you are researching a niche topic, there will be lots of mistakes if the audio isn't of high quality, because that knowledge isn't in the LLM weights.
Another challenge is that I often wanted to extract content from live streams, but they are very long with lots of pauses, so I needed to do some cutting and processing on the audio clips.
In the app I built, I feed in an RSS feed of video subscriptions, and at the other end out comes a fully built website with summaries, analysis, and transcriptions, automatically updated from that subscription feed.
You've raised two absolutely critical architectural points that we're wrestling with:
Official Subtitles vs. LLM Transcription: You are 100% correct about auto-generated subs being junk. We view official subtitles as the "trusted baseline" when available (especially for major educational channels), but your experience with Gemini confirms that an optimized LLM-based transcription module is non-negotiable for niche, high-value content. We're planning to introduce an optional, higher-accuracy LLM-powered transcription feature to handle those cases where the official subs don't exist, specifically addressing the need to inject custom context (e.g., topic keywords) to improve accuracy on technical jargon.
The Automation Pipeline (RSS/RAG): This is the future. Your RSS-to-Website pipeline is exactly what turns a utility into a Research Engine. We want YTVidHub to be the first mile of that process. The challenge you mentioned—pre-processing long live stream audio—is exactly why our parallel processing architecture needs to be robust enough to handle the audio extraction and cleaning before the LLM call.
I'd be genuinely interested in learning more about your approach to pre-processing the live stream audio to remove pauses and dead air—that’s a huge performance bottleneck we’re trying to optimize. Any high-level insights you can share would be highly appreciated!
```
import ffmpeg  # the ffmpeg-python bindings

# Trim pauses/dead air before transcription: audio below -40dB at the
# start (and -35dB thereafter) counts as silence; keep 0.15s of each
# silent stretch so cuts don't sound clipped. File names illustrative.
stream = ffmpeg.input('livestream.m4a')
stream = ffmpeg.filter(
    stream, 'silenceremove',
    detection='rms',
    start_periods=1, start_duration=0, start_threshold='-40dB',
    stop_periods=-1, stop_duration=0.15, stop_threshold='-35dB',
    stop_silence=0.15,
)
ffmpeg.run(ffmpeg.output(stream, 'cleaned.m4a'))
```
A way to find specific materials would be nice. Think of converting the whole playlist into something like RAG then you can search anything from this playlist.
You hit the nail on the head regarding language support.
Mandarin/Multilingual Support: Absolutely, supporting a wide range of languages—especially Mandarin—is a top priority. Since we focus on extracting the official subtitles provided by YouTube, the language support is inherently tied to what the YouTube platform offers. We just need to ensure our system correctly parses and handles those specific Unicode character sets on the backend. We'll make sure CJK (Chinese, Japanese, Korean) languages are handled cleanly from Day 1.
The RAG/Semantic Search Idea: That is an excellent feature suggestion and exactly where I see the tool evolving! Instead of just giving the user a zip file of raw data, the true value is transforming that data into a searchable corpus. The idea of using RAG to search across an entire playlist/channel transcript is something we're actively exploring as a roadmap feature, turning the tool from a downloader into a Research Assistant.
Thanks for the use case and the specific requirements! It helps us prioritize the architecture.
You can use video understanding from Gemini LLM models to extract subtitles even when the video doesn't have official subtitles. That's expensive for sure, but you should provide this option to willing users, I think.
You are 100% right. For the serious user (researcher, data analyst, etc.) the lack of an official subtitle is a non-starter. Relying solely on official captions severely limits the available corpus.
The suggestion to use powerful models like Gemini for high-accuracy, custom transcription is excellent, but as you noted, the costs can spiral quickly, especially with bulk processing of long videos.
Here is where we are leaning for the business model:
We are committed to keeping the Bulk Download of all YouTube-provided subtitles free, but we must implement a fair-use limit on the number of requests per user to manage the substantial bandwidth and processing costs.
We plan to introduce a "Pro Transcription" tier for those high-value, high-volume use cases. This premium tier would cover:
Unlimited/High-Volume Bulk Requests.
LLM-Powered Transcription: Access to the high-accuracy models (like the ones you mentioned) with custom context injection, bypassing the "no official subs" problem entirely—and covering the heavy processing costs.
We are currently doing market research on fair pricing for the Pro tier. Your input helps us frame the value proposition immensely. Thank you for pushing us on this critical commercial decision!
[1]: https://github.com/nirw4nna/dsc
[2]: https://x.com/nirw4nna/status/1968812772944126329
I have been working on a one-week side project that ended up taking over a year… I work on it periodically with friends to add new features and patch bugs; at the moment I'm trying to expand the file sharing capabilities. It's been a journey and I have learnt quite a lot.
The aim of this is to be a simple platform to share content with others. Appreciate any feedback, this is my first time building a user facing platform. If the free tier is limiting, I've made a coupon "HELLOWORLD" if you want to stress test or try the bigger plans, it gives you 100% off for 3 months.
https://github.com/RoyalIcing/Orb
It’s implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static & bump allocation, so I’m trying to find a happy medium of simplicity with a powerful-enough paradigm yet generate simple, compact code.
It’s designed to plug into frameworks like CrewAI, AutoGen, or LangChain and help agents learn from both successful and failed interactions - so instead of each execution being isolated, the system builds up knowledge about what actually works in specific scenarios and applies that as contextual guidance next time. The aim is to move beyond static prompts and manual tweaks by letting agents improve continuously from their own runs.
Currently also working on an MCP interface to it, so people can easily try it in e.g. Cursor.
Our approach is to make the complexity more readable by using three simple block types to represent logic, data, and UI, which are connected by cables, a bit like wiring up components on an electronics breadboard.
Instead of spitting out a wall of code, the AI generates these visual blocks and makes the right connections between them. The ultimate goal is to make the output from LLM more accessible and actionable for everyone, not just developers.
[0] https://breadboards.io/
It's an Instagram-style UI but for scrolling through record releases and snippets. I worked on making it as responsive as possible, with low-latency audio playback so you can browse a lot of stuff quickly.
Wrote about it on my blog: https://www.polymonster.co.uk/blog/diig
And GitHub repo: https://github.com/polymonster/diig
Basically, think of it as "Pokemon the anime, but for real". We allow you to use your voice to talk to, command, and train your monster. You and your monster are in this sandbox-y, dynamic environment where your actions have side effects.
You can train to fight or just to mess around.
Behind the scenes, we are converting the player's voice into code in real time to give life to these monsters.
If you're interested, reach out!
With more than 300 references and around 1,500 entries, covering more than all the lemmas given in the reference dictionary Plena Ilustrita Vortaro de Esperanto, I now consider it achieved. Well, apart from some formatting of references, where I still need to fix issues related to importing templates/modules from another wiki. :D
To give some perspective: in one of the Esperanto sentence collections referenced in the appendix, I found a bit more than 7,000 mal- words, which, once stripped of the most common inflections and affixes, went down to 3,000 entries. I didn't check this remaining set in detail, but my guess is that the remaining difference was still mostly due to less frequent affix combinations that my naive filter didn't catch. For recall, Esperanto is a highly agglutinative language and encourages the use of a regular affix set to express many derived terms from a common stem, empowering expressivity through combinatorial reuse. So only twice the size of the proposed entries in the appendix is a very low figure.
I initially had this project idea years ago, and it came back to mind as I started to contribute to the port of Raku into Esperanto[3]. It resurfaced as we were going through the considerations for the lsb routine, where LSB stands for Least Significant Bit. The common way to express "least" is malplej (opposite-of-most), which is generally fine but can instead be replaced by mej, for example when terseness is a highly weighted trait. That allows, for example, mejpezbit’ instead of an alternative synonym like malplej signifa duumaĵo.
[1] https://eo.wiktionary.org/wiki/Aldono:Pri_antonimoj
[2] https://en.wikipedia.org/wiki/Plena_Ilustrita_Vortaro_de_Esp...
[3] https://github.com/Raku-L10N/EO
https://eye-of-the-gopher.github.io/
I regularly browse Reddit (and Hacker News) to keep up with new trends and research topics, but it's really time-consuming.
- It’s hard to find the right communities. Search and recommendation features aren’t quite there, and I don’t want to just passively scroll a feed.
- Going through all the comments takes too long. I just want to quickly grasp the main points people are making. If interested, I can dive in further.
So I started this project to help streamline that process—kind of like a “deep research” workflow for my own browsing.
It’s still early, but it’s already saving me time. If anyone knows of similar tools out there, I’d love to hear about them.
But I went in a different direction: it's a mix of an RSS reader with summarization. https://rss.sabino.me/
It is open source, and hosted for free on github pages, so you can customize the feeds and reddit communities.
There is also a configuration ready to use a local Llama via the GitHub build system, so you don't have to rely on paying for AI services.
After spending so much of my career dealing with APIs and building tooling for that I feel there's huge gap between what is needed and possible vs how the space generally works. There's a plethora of great tools that do one job really well, but when you want to use them the integration will kill you. When you want to get your existing system in them it takes forever. When you want to connect those tools that takes even longer.
The reality I'm seeing around myself and hearing from people we talk to is that most companies have many services in various stages of decay. Some brand new and healthy, some very old, written by people who left, acquired from different companies or in languages that were abandoned. And all of that software is still generating a lot of value for the company and to be able to leverage that value APIs are essential. But they are incredibly hard and slow to use, and the existing tools don't make it easier.
Now the foundation is done, and I've learnt a lot. I'm actually eating my own dog food by using it to track my classical guitar practice every day. I'm pausing for a while to process the requirements and think deeply about what would be helpful and how to shape the product.
LLMs such as Codex and Claude Code definitely helped a lot, but I suspect human opinions will be more helpful; after all, the tool is made for humans, not for Claude Code.
I'd also like to hear: when you start a project, if you know your audience isn't super close to AI, would you still consider enabling AI features for them?
Think Lightning, but on EVM, with credit that’s provable and enforceable. No sequencers. No DA risk. O(1) per-hop scalability.
If you want to measure real intelligence — human or machine — paste this prompt into any LLM and watch what happens:
Prompt: “Comprehend the following concept in full depth. Derive all its implicit implications, design questions, and systemic consequences. Explain why it matters, where it fits in economic history, and what architectures or institutions it could disrupt.”
<START>
RCPE — Reserve-Credit Programmable Enforceable Layer-2 Netting-Account Network
Solves Coase theorem and Diamond Dybvig runs. Each relationship is a bounded delta of obligations — the physics equation of finance:
−Lₗ ≤ Δ ≤ C + Lᵣ
RCPE ⊃ { FCUU, FRPE }
FCUU Full-Credit Unprogrammable Unenforceable
(−5000 BCE → Now) C = 0 Δ ∈ [−Lₗ, +Lᵣ]
100 % of modern finance: Banking / SWIFT / Visa / CEX trust-based • no proof • censorable
[−−− Δ −−−]
FRPE Full-Reserve Programmable Enforceable
(2015 → 2020) Lₗ = Lᵣ = 0 Δ ∈ [0, +C]
BTC Lightning / Raiden / Hydra no credit • inbound capacity wall • capital lock
[==Δ=]
XLN (RCPE) ⊃ { FCUU, FRPE }
Δ ∈ [−Lₗ, C + Lᵣ]
cryptographically enforced debt + collateral account proofs with L1 dispute resolution O(1) unicast • sovereign exits • no DA risk
[--==Δ=--]
</END>
Example - prompted with Sonnet 4.5: https://claude.ai/share/99453e1a-1ce4-4a73-aa31-36b8bea7520c
Looking for VCs, co-founders, market makers. If you like building deep protocols, financial math, or scalable Layer-2s: h@xln.finance
Since the last month, I have created a complete schematic with Circuitscript, exported the netlist to pcbnew and designed the PCB. The boards have been produced and currently waiting for them to be delivered to verify that it works. Quite excited since this will be the first design ever produced with Circuitscript as the schematic capture tool!
The motivation for creating Circuitscript is to describe schematics in terms of code rather than graphical UIs after using different CAD packages extensively (Allegro, Altium, KiCAD) in the past. I wanted to spend more time thinking about the schematic design itself rather than fiddling around with GUIs.
The main language goals are to be easy to write and reason about, to display generated graphical schematics according to the designer's wishes (because this is also part of the design process), and to encourage code reuse.
Please check it out and I look forward to your feedback, especially from electronics designers/hobbyists. Thanks!
A unified platform for product teams to announce updates, maintain a changelog, share roadmaps, provide help documentation and collect feedback with the help of AI.
My goal is to help product teams tell users about new features (so they actually use them), gather meaningful feedback (so they build the right things), share plans (so users know what's coming), and provide help (so users don't get stuck).
Doing it as an indie hacker + solo founder + lean. Started 13 days ago. Posting about my journey on Youtube every week day https://www.youtube.com/@dave_cheong
It helps you monitor metrics, logs, and consumer behavior in real time.
Check it out: https://klogic.io
Book a demo: https://klogic.io/request-demo/
Features:
- Message inspection from any topic — trace and analyze messages, view flow, lag, and delivery status
- Anomaly detection & forecasting — predict lag spikes, throughput drops, and other unusual behaviors (see the lag-measurement sketch after this list)
- Real-time dashboards for brokers, topics, partitions, and consumer groups
- Track config changes across clusters and understand their impact on performance
- Interactive log search with filtering by topic, partition, host, and message fields
- Build custom dashboards & widgets to visualize metrics that matter to your team
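To make the lag monitoring concrete, here's a bare-bones illustration with kafka-python (purely illustrative, not how klogic is implemented; the broker address and group name are placeholders). Per-partition lag is just the log end offset minus the committed consumer offset:

```
from kafka import KafkaConsumer
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
consumer = KafkaConsumer(bootstrap_servers='localhost:9092')

# lag = log end offset - committed offset, per partition
committed = admin.list_consumer_group_offsets('my-group')
ends = consumer.end_offsets(list(committed))
for tp, meta in committed.items():
    print(tp.topic, tp.partition, 'lag =', ends[tp] - meta.offset)
```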
What pain points do you face in monitoring Kafka, which features would you like next, and any improvements to dashboards, log search, or message inspection?
We will add the screenshots.
From there, users can either send funds to another wallet or spend directly using a pre-funded debit card. It’s still early, but we’re testing with a small group of users who want to receive payments faster and avoid PayPal or wire fees.
If you're a freelancer or digital nomad interested in trying it out, you can check it out here: https://useairsend.com
Also been doing small little prototypes with cursor/claude for a game I'd love to tinker on more.
https://prototype-actions.prefire.app/
https://prototype-fov.prefire.app/
It's quite an interesting process to vibe code game stuff where I have a vague concept of how to achieve things but no experience/muscle memory with three.js & friends.
Here's a link to the API docs page: https://docs.unwrangle.com.
My biggest technical challenge remains dealing with the immense number of different APIs (and not-APIs) in the different status pages out there. Marketing remains my biggest overall challenge as my background is engineering, but I've learnt quite a bit since I launched this.
* Velo - Postgres with instant branching (https://github.com/elitan/velo)
* Terra - Declarative schema management for Postgres (https://github.com/elitan/terra)
Some fun side projects I hack on during evenings and weekends.
https://xdownload.org?ref=hn
That main use case is done. I'm now focusing on travel guides for remote workers. The goal is to help those new to a country become as productive as they would be at home within 2-3 hours of landing at the airport. I've completed 80% of a guide to South Korea.
I started working on these guides after my friends in Tokyo commented during our last co-working session on how fast I got to our favourite spot (Tokyo Innovation Base) from Narita Airport; they thought I was already in-town.
I started using it as a tool call in security scanning (think something like Claude Code for security scanning).
Give it a read if you're interested:
https://codepathfinder.dev/blog/codeql-oss-alternative/
https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
Happy to discuss!
I haven't used Claude Code, but recently switched to OpenCode. My token usage and cost are a lot higher, and I'm not sure why yet, but I suspect Aider's approach is much leaner.
Most sites fall into extremes: Dribbble leans toward polished mockups that never shipped, while Awwwards and Mobbin go heavy on curation. The problem isn’t just what they pick — it’s that you only ever see a narrow slice. High curation means low volume, slow updates, and a bias toward showcase projects instead of the everyday, functional interfaces most of us actually design.
Font of Web takes a different approach. It’s closer to Pinterest, but purely for web design. Every “pin” comes with metadata: fonts, colors, and the exact domain it came from, so you can search, filter, and sort in ways you can’t elsewhere. The text search is powered by multimodal embeddings, so you can use search queries like “minimalist pricing page with illustrations at the side” and get live matches from real websites.
What you can do:
- natural language search (e.g. “elegant serif blog with sage green”)
- font search (single fonts, pairings, or 2+ combos, e.g. https://fontofweb.com/search/pins?family_id=109 , https://fontofweb.com/search/pins?family_id=135 )
- color search/sorting (done in perceptual CIELAB space, not RGB; see the sketch after this list)
- domain search (filter by site, e.g. https://fontofweb.com/search/pins?domain=apple.com , https://fontofweb.com/search/pins?domain=blender.org )
- live website analysis (via extension — snip any part of a page and see fonts/colors instantly, works offline)
- one-click font downloads
- palette extraction (copy hex codes straight to clipboard)
- private design collections
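Why CIELAB rather than RGB? Euclidean distance in Lab roughly tracks how different two colors look to a human, which RGB distance doesn't. A self-contained sketch of the classic CIE76 distance (standard textbook formulas, not Font of Web's actual code):

```
def srgb_to_lab(r, g, b):
    # sRGB [0..255] -> linear RGB
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (D65 white point)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> Lab
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(c1, c2):
    # CIE76: plain Euclidean distance in Lab space
    return sum((a - b) ** 2 for a, b in
               zip(srgb_to_lab(*c1), srgb_to_lab(*c2))) ** 0.5

print(delta_e76((255, 0, 0), (250, 30, 20)))  # small: perceptually close
```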
I'd appreciate feedback on the UX/UI, the feature set, and general usefulness in your own workflow.
https://github.com/westonwalker/primelit
Drawing a lot of inspiration from interval.com. It was an amazing product, but it was a hosted SaaS. I'm exploring taking the idea to the .NET ecosystem and also making it a NuGet package that can be installed and served through any ASP.NET project.
Right now I am working on adding historical tables extracted from filings, as well as historical financials and their calculations.
https://www.secblast.com
Still a work in progress, but please check it out
[1] https://nid.nogg.dev [2] https://mood.drone.nogg.dev
Also working on a youtube channel [3] for my climbing/travel videos, but the dreary state of that website has me wondering whether it's worth it, tbh. I haven't been able to change my channel name after trying for weeks. It's apparently the best place to archive edited GoPro footage at least.
[3] https://www.youtube.com/@nidnogg
I like Arc Browser’s command panel and Chrome’s tab search, so I want to combine them and add some enhancements:
- Pinyin-based fuzzy search (see the sketch after this list)
- Search through history and bookmarks
- Custom keybindings
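The pinyin matching idea, shown in Python for illustration (the extension itself would be JavaScript; pypinyin here stands in for whatever conversion library is actually used):

```
from pypinyin import lazy_pinyin

def matches(query, title):
    # Flatten a Chinese tab title to toneless pinyin, then substring
    # match, so typing "zhihu" finds a tab titled "知乎".
    flat = ''.join(lazy_pinyin(title)).lower()
    return query.lower() in flat or query.lower() in title.lower()

print(matches('zhihu', '知乎 - 发现更大的世界'))  # True
```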
For now, I’m working on bringing !bang support to Moyu Search.
They mostly work already; I would appreciate testing from anyone who already has a larger, real-world Litestream v0.5.0 setup running.
https://fly.io/blog/litestream-revamped/#lightweight-read-re...
https://github.com/ncruces/go-sqlite3/tree/litestream/litest...
We are in it for the long term. Not a startup, not looking for investment. Just a plain paid product (free while in beta) by a few people. We have a few active users and are looking for more before we remove the beta label :) It's a PWA app, currently targeted at desktops. For personal software, I think local-first makes a lot of sense.
Also working on a GxP-compliant, offline-first, real-time synced QMS, but I’ve put that on hold in favor of optimizing my resume.
Should be as easy as updating all data in the data/ folder and you can get your own version. Mind you: getting the SVG logos right is the hard part
You define the resources needed for each activity, the time per activity, and the dependencies between activities to complete a process.
After you input the process you want to complete, you get a schedule similar to a Gantt chart.
The system displays which activities should be ongoing at any moment; you click the GUI or call the API to complete activities.
After the process is complete, you get a report of delays and deviations by takts, activities, and resources.
Based on that report you can decide what improvements to make to your process.
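As a toy illustration of the inputs described above (not the product's actual format), here is the skeleton of such a schedule: activities with durations and dependencies, from which earliest start times fall out:

```
activities = {
    'dig':   {'duration': 2, 'deps': []},
    'pour':  {'duration': 3, 'deps': ['dig']},
    'frame': {'duration': 5, 'deps': ['pour']},
    'wire':  {'duration': 2, 'deps': ['frame']},
    'plumb': {'duration': 2, 'deps': ['frame']},
}

def earliest_starts(acts):
    # Earliest start = latest finish among dependencies (0 if none).
    start = {}
    def resolve(name):
        if name not in start:
            start[name] = max((resolve(d) + acts[d]['duration']
                               for d in acts[name]['deps']), default=0)
        return start[name]
    for name in acts:
        resolve(name)
    return start

for name, s in sorted(earliest_starts(activities).items(), key=lambda kv: kv[1]):
    end = s + activities[name]['duration']
    print(f'{name}: {s} -> {end}')
```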
Here's Hirevire’s #buildinpublic stats for September 2025!
MRR Metrics
$6,691 MRR (+11.14% MoM ▲)
$398 is the average lifetime value and ARPU is $61.10
9.86% Net MRR churn rate and 14.29% customer churn
21,435 (-24% MoM ▼) applications collected
Conversion numbers
3.67% Visits to Trial signups
8.30% Trial to paid plans
Formo makes analytics and attribution simple for onchain apps. You get the best of web, product, and onchain analytics on one versatile platform.
Have learned a lot about data engineering so far.
Merchants who want to sell on Etsy or Shopify either have to pay a listing fee or pay per month just to keep an online store on the web. Our goal is to provide a perpetually free marketplace that is powered solely off donations. The only fees merchants pay are the Stripe fees, and it's possible that at some volume of usage we will be able to negotiate those down.
You can sell digital goods as well as physical goods. Right now in the "manual onboarding" phase for our first batch of sellers.
For digital goods, purchasers get a download link for files (hosted on R3).
For physical goods, once a purchase comes through, the seller gets an SMS notification and a shipping label gets created. The buyer gets notified of the tracking number and on status changes.
We use Stripe Connect to manage KYC (know your customer) identities so we don't store any of your sensitive details other than your name and email. Since we are in the process of incorporating as a 501(c)(3) nonprofit, we are only serving sellers based in the United States.
The mission of the company is to provide entrepreneurial training to people via our online platform, as well as educational materials to that aim.
I want to be able to script prices, product descriptions, things like that. And see them show up in a request on sale.
When you say "algorithmically driven print-on-demand" do you mean that prices would automatically adjust based on inventory? Or like, how do you mean.
Also, when you say "see them show up in a request on sale" — can you clarify? I interpret this to mean you want a webhook triggered when an order comes in.
https://github.com/jakeroggenbuck/kronicler
This is why I wrote kronicler to record performance metrics while being fast and simple to implement. I built my own columnar database in Rust to capture and analyze these logs.
To capture logs, `import kronicler` and add `@kronicler.capture` as a decorator to functions in Python. It will then start saving performance metrics to the custom database on disk.
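In practice that looks like this (a minimal sketch; the decorated function is just a stand-in):

```
import kronicler

@kronicler.capture  # records timing for each call to the database
def slow_sum(n):
    return sum(i * i for i in range(n))

slow_sum(1_000_000)  # metrics are saved to the on-disk columnar DB
```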
You can then view these performance metrics by adding a route to your server called `/logs` where you return `DB.logs()`. Paste your hosted URL into the settings of usekronicler.com (the online dashboard) to view your data with a couple of charts. See the readme or the website for details on how to do this.
I'm still working on features like concurrency and other overall improvements. I would love some feedback to help shape this product into something useful for you all.
Thanks! - Jake
It's a full-funnel marketing attribution & insights tool, with the intent of making marketing & marketing spend more transparent. We started by creating a UTM tracking tool for our agency clients, and it's currently a product of its own. We'll make it a platform to remove some of the limits that we have with WordPress and to reach a larger audience.
EU based.
The goal is to provide a fully typed Node.js framework that allows you to write a TypeScript function once and then decide whether to wire it up to HTTP, WebSocket, queues, scheduled tasks, an MCP server, a CLI, or other interactions.
You can switch between serverless and server deployments without any refactoring; it's completely agnostic to whatever platform you're running it on.
It also provides services, permissions, auth, an event hub, advanced tree shaking, middleware, schema generation and validation, and more.
The way it works is by scanning your project via the TypeScript compiler and generating a bootstrap file that imports everything you need (hence the tree shaking), which also lets you filter your backend down to only the endpoints needed (great for plucking out individual entry points for serverless). It also generates typed fetch, RPC, WebSocket, and queue client files. Types are pretty much most of what pikku is about.
Think honoJS and nestJS sort of combined together, but with support for most server standards, not just HTTP.
The website needs love; I'm currently working on a release to add CLI support and full tree shaking.
It clearly supports different runtimes than node with different capabilities and limitations.
It seems more of a runtime-agnostic web server.
I agree framing pikku has been a pretty hard challenge for me.
It supports different runtimes in the sense of deno / bun or custom nodeJS runtimes in the cloud, but ultimately relies purely on typescript / a JavaScript compatible backend.
It’s less of a webserver and more of a lightweight framework though, since it also supports CLIs or Frontend SDKs / isn’t tied to running an actual server.
So I built Riff Radar - it creates playlists from your followed artists' complete discography, and allows you to tailor them in multiple ways. Those playlists are my top listened to. I know, because you can also see your listening statistics (at the mercy of Spotify's API).
The playlists also get updated daily. Think of it as a better version of the daily mixes Spotify creates.
https://riffradar.org/
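For the curious, pulling followed artists' discographies out of the Spotify API is the easy part; here's an illustrative spotipy sketch (Riff Radar's actual pipeline may differ):

```
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope='user-follow-read'))

# Walk followed artists, then each artist's releases.
followed = sp.current_user_followed_artists(limit=50)['artists']['items']
for artist in followed:
    albums = sp.artist_albums(artist['id'], album_type='album,single')
    for album in albums['items']:
        print(artist['name'], '-', album['name'], album['release_date'])
```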
My daughter loves stories, and I often struggled to come up with new ones every night. I remember enjoying local folk tales and Indian mythological stories from my childhood, and I wanted her to experience that too — while also learning new things like basic science concepts and morals through stories.
So I built Dreamly and opened it up to friends and families. Parents can set up their child’s profile once - name, age, favorite shows or characters, and preferred themes (e.g. morals, history, mythology, or school concepts). After that, personalized stories are automatically delivered to their inbox every night. No more scrambling to think of stories on the spot!
I also like making up stories when we go on hikes. Long, rambling stories about unicorns befriending spiders and flying to faraway lands.
Slice and Share; framing, diptychs, also helps share photos on social media without cropping: https://apps.apple.com/app/slice-and-share/id6752774728
Both are free, no ads, no account required. I use them myself; I’m looking to improve them too so feedback is very welcome.
It is hard to show that AI can reimplement, for example, special relativity - because we don't even have enough text from the 19th century to train an LLM on it - so we need a new idea, something invented after the LLM was trained. I took Gwern's essay and checked with deep search and deep research which ideas from that essay are truly novel, and apparently there are some, so reinventing them seemed like a good target: https://github.com/zby/DayDreamingDayDreaming/blob/main/repo... https://github.com/zby/DayDreamingDayDreaming/blob/main/repo...
So here it is: a system that can reliably churn out essays on daydreaming AIs. On one level it is kind of silly - we already knew that infinite monkeys could write the works of Shakespeare. The generator was always theoretically possible; the hard part is the verifier. But still, the search space in my system is much smaller than the space of all possible letter sequences, so at least I can show that the system is a little more practical.
Here are some results: https://github.com/zby/DayDreamingDayDreaming/tree/main/data...
You can modify it to reinvent any other new idea - you just need to provide it the inspirations and evals for checking the generated essays.
I am thinking about next steps. Maybe I could make it a bit more universal, but it seems that building something that works as needed would require scale.
I kind of like the software framework I vibe coded for this. It lets you easily build uniform samples where you can legitimately do all kinds of comparisons. But I am not so sure about using Dagster as the base for the system.
Obviously this is quite sensitive data, so I architected it to never store raw data, to allow bring-your-own-key, and, even in team settings, to be fully private by default; everybody keeps control of all their results.
Started about six months ago, have some first users, and always looking for feedback!
It is a small playground for text, vision, and audio models that use Transformers.js, WebGPU, and MediaPipe.
There's no server, no tracking, and no data leaving your device, everything runs locally. The models download once, cache for offline use.
Makes it easy to search, favourite and listen to online radio streams.
I like to listen to online radio while working and none of the available web apps I could find hit the nail on the head, so decided to build my own.
https://jsassembler.fly.dev/ https://csharpassembler.fly.dev/ https://goassembler.fly.dev/ https://rustassembler.fly.dev/ https://nodeassembler.fly.dev/ https://phpassembler.fly.dev/
The purpose is to find out whether I can build declarative software in multiple languages (Rust, Go, Node.js, PHP and JavaScript) while knowing only one language (C#) and without understanding the implementations deeply.
Another purpose is to validate AI models and their efficiency: development using AI is hard but highly productive, and having declarative rules to recreate the implementation can be used to validate models.
Currently I am convinced it is possible to build. Now I'm working on creating a solid foundation, with tests of the two assembler engines, structure dumps, logging, and logging outputs, so the AI can use those to fix issues iteratively.
I need to add more declarative rules and implement a full-stack web assembler to see whether the AI will hit the technical debt that slows or stops progress. Only time will tell.
It looks inside each file to see what it’s about, then moves it to the right folder for you.
Everything happens on your Mac, so nothing leaves your computer. No clouds, no servers.
It already works with PDFs, ePubs, text, Markdown, and many other file types. Next I’m adding Microsoft Office and iWork support.
If you have messy folders anywhere on your Mac, Fallinorg can help.
- Writing a book about Claude Code, not just for assisted programming, but as a general AI agent framework.
https://github.com/anthropics/claude-agent-sdk-python/commit...
Claude Code used to be a coding agent only, but it transformed into a general AI agent. I want to explore more about that in this book.
It's a sync infra product that is meant to cut down 6 months of development time, and years of maintenance of deep CRM sync for B2B SaaS.
Every Salesforce instance is a unique snowflake. I am moving that customization into configuration and building a resilient infrastructure for bi-directional sync.
We also recently launched a pretty cool abstraction on top of Salesforce CDC which is notoriously hard to work with: https://www.withampersand.com/blog/subscribe-actions-bringin...
New version is a rebuild in react with cleaner interface, localisation, a bunch of new features and lays the groundwork to allow full html docs instead of only markdown
Check out my project and my short film at https://cinesignal.com/p/call154
The basic idea is that integrating business data into a B2B app or AI agent process is a pain. On one side there's web data providers (Clearbit, Apollo, ZoomInfo) then on the other, 150 year old legacy providers based on government data (D&B, Factset, Moody's, etc). You'd be surprised to learn how much manual work is still happening - teams of people just manually researching business entities all day.
At a high level, we're building out a series of composable deep research APIs. It's built on a business graph powered by integrations to global government registrars and a realtime web search index. Our government data index is 265M records so far.
We're still pretty early and working with enterprise design partners for finance and compliance use cases. Open to any thoughts or feedback.
https://savvyiq.ai
[1] https://www.robinsloan.com/notes/home-cooked-app/ [2] https://booplet.com/blog/anyone-can-cook
We received data last week verifying we are effectively mineralizing CO2 at a high rate while saving our farmer $135/acre annually in liming costs.
We’ve come this far on grants. Now it’s time to fundraise so we can bankroll our PhDs whilst we secure pre-purchase offtake deals.
If you know of any impact investors or are an offtake buyer at a large company, please email me at zach@goal300.earth
Fitness Tools https://aretecodex.pages.dev/tools/
Fitness Guides https://aretecodex.pages.dev/
A lot of people often ask questions like:
- How do I lose body fat and build muscle?
- How can I track progress over time?
- How much exercise do I actually need?
- What should my calorie and macro targets be?
One of the most frequently asked questions in fitness forums is about cutting, bulking, or recomposition. This tool helps you navigate those decisions: https://aretecodex.pages.dev/tools/bulk-cut-recomposition-we...
We’ve also got a Meal Planner that generates meal ideas based on your calorie intake and macro split: https://aretecodex.pages.dev/tools/meal-plan-planner
Additionally, I created a TDEE Calculator designed specifically to prevent overshooting TDEE in overweight individuals: https://aretecodex.pages.dev/tools/tdee-calculator
For a deeper dive into the concept of TDEE overshoot in overweight individuals, check out this detailed post: https://www.reddit.com/r/AskFitnessIndia/comments/1mdppx5/in...
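For context, most TDEE calculators start from a Mifflin-St Jeor estimate like the one below (the standard textbook formula, not necessarily this calculator's exact method). Overshoot happens because total body weight inflates the estimate for overweight users, which is the failure mode the tool tries to correct:

```
def tdee_mifflin(weight_kg, height_cm, age, male=True, activity=1.4):
    # Mifflin-St Jeor BMR, scaled by an activity multiplier.
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)
    return bmr * activity

print(round(tdee_mifflin(95, 175, 35)))  # ~2623 kcal/day
```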
Started from the poor state of many Python HTTP clients and the poor testing utilities available for them (e.g. the neglected state of httpx and all its perf issues).
I think this project is an interesting addition as a software supply chain solution, but generating interest in the project in this early stage proves difficult.
For those interested, I maintain a spec in parallel of the development at https://github.com/asfaload/spec
In parallel, I'm trying to figure out how to train an LLM for SAST.
So far I have a Duet mainboard wired up to motors and a commercial gantry set (OpenBuilds). I've figured out how to wire up a servo control board to a GPIO pin, and the G-code necessary to run the servo up and down.
I'm designing and 3d printing parts for the pen gantry, I have a nice rail / slider setup using linear bearings. I'm almost done working out how the pen holder fits into my gantry setup but I'm struggling a little bit getting this past the finish line.
I already figured out how to generate custom GCODE that takes into account the needs of having no z axis. I need to make a simple web interface that lets me interact with the duet over USB, and this will be running off a raspi. This will allow me more GPIO and flexibility vs just wiring buttons straight to the duet.
I already have some code and logic to generate trace data from bitmap images, I just need to figure out a way to automate it so that the output still looks nice.
Once all that works... if I glue it together I will be able to push button and @robotdrawsyou (https://www.instagram.com/robotdrawsyou)
The goal is to create technology that is indistinguishable from magic. People without the technical understanding of what's going on will just see it as tech junk, but my hope is that by breaking down all the individual parts it will allow people to learn about CNC machines, vector vs raster and what it means for something to actually be a robot.
I still have zero idea how to make money with this. Career is struggling really badly but I am hopeful that what I am working on will allow me to display competency and skill to an employer. That's the fantasy at least.
Turns out there are a lot of businesses that constantly get banned and they need a reliable source of notifications about that
Browser version here, if you're curious:
https://jazzprogramming.github.io/vorfract/
I am overengineering a simulation-based solution to this because I think there are scenarios based on cup shapes and environmental temperatures that allow either answer to be true. This will end up as a blog post I guess.
An agent that plugs into Slack and helps companies identify and remediate infrastructure cost-related issues.
The solution? Have the cartridge keep track of CPU parity (there's no simple way to do this with just the CPU), then check that, skip one cycle if needed... and very carefully cycle-time the rest of the routine, making sure that your reads land on safe cycles and your writes land in places that won't throw off the alignment.
But it works! It's quite reliable on every console revision I've thrown it at so far. Suuuper happy with that.
A LLM‑powered OSINT helper app that lets you build an interactive research graph. People, organizations, websites, locations, and other entities are captured as nodes, and evidence is represented as relationships between them.
Usually assistants like ChatGPT Deep Research or Perplexity are focused on fully automatic question answering, and this app lets you guide the search process interactively, while retaining knowledge in the graph.
The plan is to integrate it with multiple OSINT-related APIs, scrapers, etc.
Write a dev blog in Word format using Tritium, jot down bugs or needs, post blog, improve and repeat.
[1] https://tritium.legal/blog
25-Hydroxyvitamin D, also known as calcidiol, regulates calcium absorption in the intestines, promotes bone formation and mineralization, and supports immune function.
Apolipoprotein B (ApoB) is a protein that binds to LDL receptors on cells, allowing lipoproteins to deliver cholesterol and triglycerides to tissues for energy or storage.
Lipoprotein(a) is a low-density lipoprotein variant identified as a risk factor for atherosclerosis and related diseases, such as coronary heart disease and stroke.
etc.
One of the best I’ve seen in this thread!
Good luck with your mission!
Currently it looks like this:
But I have tons of features I want to add: asset management, image generation, collaborative editing, etc. It's still a prototype, but I'm actively posting about it on Twitter as I go. Soon, I'll probably start publishing versioned builds for people to play with: https://x.com/danielvaughn
Imagine your basic Excel spreadsheet -> generating document files, but add:
- Other sources like SQL queries
- User form (e.g. "Generate documents for Client Category [?]")
- Chaining sources in order like SQL queries with parameters based on the user form
- Split at multiple points (5 records in a csv, 4 records in a sql result = 20 generated documents)
- Full Jinja2 templating with field substitution, but also if/for blocks that work nicely with .docx files
- PDF output
- output file names using the same templating: "/BusinessDrive/{{ client_id }}/Invoice - {{ invoice_id}}.pdf"
All saved in reproducible workflows (for example if you need to process a .csv file you receive each morning)
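A minimal sketch of the split-and-render idea described above, with stand-in data and plain-text output (the real tool reads CSV/SQL sources and renders .docx/PDF; everything below is illustrative, not its actual code):

    # Two record sources are crossed, and each combination renders one output file.
    import itertools
    from pathlib import Path
    from jinja2 import Template

    clients = [{"client_id": f"C{n}"} for n in range(5)]                   # stand-in CSV source (5 records)
    invoices = [{"invoice_id": n, "lines": ["widget"]} for n in range(4)]  # stand-in SQL result (4 records)

    name_tpl = Template("out/{{ client_id }}/Invoice - {{ invoice_id }}.txt")
    body_tpl = Template(
        "Invoice {{ invoice_id }} for {{ client_id }}\n"
        "{% for line in lines %}- {{ line }}\n{% endfor %}")

    # 5 x 4 = 20 generated documents, the "split at multiple points" behavior
    for client, invoice in itertools.product(clients, invoices):
        ctx = {**client, **invoice}
        out = Path(name_tpl.render(ctx))
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(body_tpl.render(ctx))

Note that the output file name and the document body go through the same templating engine, which is what makes the "/{{ client_id }}/Invoice - {{ invoice_id }}.pdf" naming trick a one-liner.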
The goal is to catch vulnerabilities early in the SDLC by running an agentic loop that autonomously hunts for security issues in codebases. Currently available as a CLI tool and VSCode extension. I've been actively using it to scan WordPress and Odoo plugins and have found several privilege escalation vulnerabilities, which I documented in a blog post here: https://codepathfinder.dev/blog/introducing-secureflow-cli-t...
Nice to call it feature complete and move on!
Open-sourcing them, of course. I find that I can sketch out a basic idea with Copilot and it'll get 80% of the way there.
Godot is simply a joy, as long as you understand what it can do and what it can't.
It will never ever happen in my wildest dreams, but I want to make open source games full time.
I want the entire game industry to have to compete with high quality open source games and frameworks.
Assuming I ever have a chance to retire, I'll be an old man writing code for sport.
It's for doing realtime "human cartography", to make maps of who we are together in complex large-scale discourse (even messy protest).
https://patcon.github.io/polislike-human-cartography-prototy...
Newer video demo: https://youtu.be/C-2KfZcwVl0
It's for exploring human perspective data -- agree, disagree, pass reactions to dozens or hundreds of belief statements -- so we can read it as if it were Google Maps.
My operating assumption is that if a critical mass of us can understand culture and value clashes as mere shapes of discourse, and we can all see it together, then we can navigate them more dispassionately and with clear heads. Kinda like reading a map or watching the weather report -- islands that rise from oceans, or plate tectonics that move like currents over months, and terraform the human landscape -- maybe if we can see these things together, we'll act less out of fear of fun-house caricatures. (E.g., "Hey, dad, it seems like the peninsula you're on is becoming a land bridge toward the alt right corner. I feel a little bummed about that. How do you feel about it?")
(It builds on data and the mathematical primitives of a great tool called Pol.is, which I've worked with for almost a decade.)
Experimental prototype of animating between projections: https://main--68c53b7909ee2fb48f1979dd.chromatic.com/iframe.... (advanced)
https://github.com/bobjansen/mealmcp
There is a website too so you don’t actually need to use MCP:
https://meals.bobjansen.net/
It is a DNS service for AWS EC2 that keeps up with ever-changing IPs when you can't use an Elastic IP (e.g., with ASGs) or when you don't want to install any third-party clients on your instances.
It fetches the IPs regularly via the AWS API and assigns them to fixed subdomains.
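For illustration, one poll cycle might look roughly like this with boto3. This is a guess at the approach, not the service's actual code; the tag name, hosted zone ID, and domain are made up:

    import boto3

    ec2 = boto3.client("ec2")
    route53 = boto3.client("route53")

    # Hypothetical convention: instances tagged "dns-name" get a fixed subdomain.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag-key", "Values": ["dns-name"]}])["Reservations"]

    for r in reservations:
        for inst in r["Instances"]:
            ip = inst.get("PublicIpAddress")
            if not ip:
                continue
            name = next(t["Value"] for t in inst["Tags"] if t["Key"] == "dns-name")
            route53.change_resource_record_sets(
                HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
                ChangeBatch={"Changes": [{
                    "Action": "UPSERT",  # create or update the A record in place
                    "ResourceRecordSet": {
                        "Name": f"{name}.example.com",
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip}],
                    },
                }]},
            )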
It is pretty new :) still developing actively.
https://github.com/skanga/Conductor
Conductor is an LLM-agnostic framework for building sophisticated AI applications using a subagent architecture. It provides a robust, flexible platform for orchestrating multiple specialized AI agents to accomplish complex tasks, with features like LLM-based planning, memory persistence, and dynamic tool use.
The project is inspired by the concepts outlined in "The Rise of Subagents" by Phil Schmid at https://www.philschmid.de/the-rise-of-subagents and aims to provide a practical implementation of this architectural pattern.
They’re always on. They log into real sites, click around, fill out forms, and adapt when pages change — no brittle scripts, no APIs needed. You can deploy one in minutes, host it yourself, and watch it do work like a human (but faster, cheaper, never tired).
Kind of like a “browser-use cloud,” except it’s yours — open, self-hostable, and way more capable.
You can check it out at https://antiques-id-1094885296405.us-central1.run.app/.
Just added health inspection data from countries that publish it as open datasets (UK and Denmark). If anyone knows of others, I'd appreciate hints.
Thinking of focusing on another idea for the rest of the year; I have a rough idea for a map-based UI to structure history by geofences or lat/lng points for small local museums.
[0] https://github.com/paul-gauthier/entangled-pair-quantum-eras...
I'm trying to use this to create stories that would be somewhat unreasonable to write otherwise. Branching stories (i.e., CYOA), multiperspective stories, some multimedia. I'm still trying to figure out the narrative structures that might work well.
LLMs can overproduce and write in different directions than is reasonable for a regular author. Though even then I'm finding branching hard to handle.
The big challenges are rhythm, pacing, following an arc. Those have been hard for LLMs all along.
https://periplus.app
The goal was to make the learning material very malleable, so all content can be viewed through different "lenses" (e.g. made simpler, more thorough, from first principles, etc.). A bit like Wikipedia it also allows for infinite depth/rabbit holing. Each document links to other documents, which link to other documents (...).
I'm also currently in the middle of adding interactive visualizations which actually work better than expected! Some demos:
https://x.com/mato_gudelj/status/1975547148012777742
You can read more about it and watch a demo: https://blog.with.audio/posts/web-reader-tts
I built this to get some traffic to my main project's website using a free tool people might like. The main project: https://desktop.with.audio -> a one-time-payment text-to-speech app with text highlighting, MP3 export, and other features, on macOS (ARM only) and Windows.
It's called lazyslurm - https://github.com/hill/lazyslurm
Would love feedback! <3
It's already working, and slightly faster than the CPU version, but that's far from an acceptable result. The occupancy (which is a term I first learned this week) is currently at a disappointing 50%, so there's a clear target for optimisation.
Once I'm satisfied with how the code runs on my modest GPU at home, the plan is to use some online GPU renting service to make it go brrrrrrrrrr and see how many new elements I can find in the series.
[0] https://oeis.org/A007632
[1] https://github.com/ashdnazg/palindromes
I'm working on a web app that creates easy-to-understand stories and explainers for the sake of language learning. You can listen in your favourite podcast app, or directly on the website with illustrations.
I'm eager to add more languages if anyone is fluent/able to help me evaluate the text-to-speech.
It’s been a fun, practical way to continuously evaluate the latest models two ways - via coding assistance & swapping between models to power the conversational AI voice partner. I’ve been trying to add one big new feature each time the model generation updates.
The next thing I want to add is a self improving feedback loop where it uses user ratings of the calls & evaluations to refine the prompts that generate them.
Plus it has a few real customers which is sweet!
I just took Qwen-Image and Google’s image AIs for a spin and I keep a side by side comparison of many of them.
https://generative-ai.review/2025/09/september-2025-image-ge...
and I evaluated all the major 3D Asset creators:
https://generative-ai.review/2025/08/3d-assets-made-by-genai...
It's mostly where I want it to be now, but I still need to automate the ingest of USPTO data. I'd really like it to show a country flag on the search results page next to each item, but inferring the brand name just from the item title would probably need some kind of natural language processing; if there's even a brand in the title.
No support for their mobile layout. Do many people buy from their phone?
It’s fast, free, keyboard-only, cross-platform, and ad-free. It’s been my only source of music for the past 6 months or so.
I’m not sharing the link because of music copyright issues. But I think more people should do that, to break free of the yoke of greedy music platforms.
- I think learning of new stuff is twisted in the current environment. "New stuff" in the sense of radio/Spotify is mostly "same stuff as I know and like, but slightly different so it feels new". You don’t discover truly new stuff unless you actively search for it. No radio or service is going to do that for you passively.
We're pretty jazzed.
Updated the landing page just yesterday!
Landing page + waitlist: https://dailyselftrack.com/
I was tired of only having 1 or 2 things per newsletter that interested me, multiplied by however many newsletters I've subscribed to. Trying to solve that.
The idea: design newsletter sections on whatever topics you want (football scores, tech news, new restaurants in your area, etc.), choose your tone and length preferences, then get a fully cited digest delivered weekly to your inbox. Completely automated after initial setup (but you can refine it anytime).
Have the architecture sorted and a pretty good dev plan, but collecting interest before I invest a ton of time into it.
If you feel this pain too, waitlist is here: https://www.conflio.app/
(Or maybe I'm just too lazy about staying informed haha)
The use case for this is a bit niche, and better tools exist for this general problem in ORMs and so forth, but it works for a problem I have.
Making a photo-based calorie tracker accurate.
This is basically a variation on bin-packing (which is NP-hard), but it's tractable if you prune the search space enough.
https://explorer.monadicdna.com/
I'll be adding more features in the coming days!
I am building a tool that gives automated qualitative feedback on websites. This is the early and embarrassing MVP: https://vibetest-seven.vercel.app/product
You provide your URL and an LLM browses your site and writes up feedback. Currently working on increasing the quality of the feedback. Trying to start with a narrower set of tests that give what I think is good feedback, then increase from there.
If a tool like this analyzed your website, what would you actually want it to tell you? What feedback would be most useful?
It cuts online course creation to 1-2 hours and gives plenty of options for tutors to monetise.
https://github.com/whyboris/Video-Hub-App & https://videohubapp.com/
lpviz is like Desmos, but for linear programming - I've implemented a few LP solvers in TypeScript and hooked them up to a canvas so you can draw a feasible region, set an objective direction, and see how the algorithms work. And it all runs locally, in the browser!
If you go to https://lpviz.net/?demo it should show you a short tour of the features/how to use it.
It's by no means complete but I figured there may be some fellow optimization enthusiasts here who might be interested to take a look :) Super open to feedback, feature requests, comments!
For a 2-min intro to LP, I recommend https://www.youtube.com/watch?v=7ZjmPATPVzI
I just released the changelog 5 minutes ago (https://intrasti.com/changelog), for which I went with a directory-based approach using the international date format YYYY-MM-DD, so in the source code it's ./changelog/docs/YYYY/MM/DD.md. It seems to do the trick and is ready for pagination, which I haven't implemented yet.
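Pagination over that layout should be straightforward, since ISO dates sort lexicographically. A stdlib-only sketch (not the site's actual code):

    from pathlib import Path

    PAGE_SIZE = 10

    def changelog_page(page: int, root: str = "changelog/docs") -> list[Path]:
        # YYYY/MM/DD.md paths sort lexicographically == chronologically,
        # so newest-first pagination is just a reverse sort plus a slice.
        entries = sorted(Path(root).glob("*/*/*.md"), reverse=True)
        start = page * PAGE_SIZE
        return entries[start:start + PAGE_SIZE]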
It is a modified version of Shopify CEO Tobi's try implementation[0]. It extends his implementation with sandboxing capabilities and is designed with functional core, imperative shell in mind.
I had success using it to manage multiple coding agents at once.
[0]: https://github.com/tobi/try
https://github.com/arcuru/eidetica
It’s a simple NPM package that produces colorful avatars from input data to aid with quick visual verification. I’d like to see it adopted as a standard.
It's basically Snapchat, but without other people.
Currently in AppStore review!
https://imgur.com/a/CSMw6EG
There is nothing special compared to other live chats; the goal is to offer an affordable and unlimited live chat for small projects and companies.
https://tinyfinch.chat
Our company would love a well designed chat button linked to Slack, combined with a helpdesk that supports email queries and also allows people to raise issues via the web.
That’s it, that’s all we need. Happy to pay.
It’s hard to express how badly intercom is designed and engineered. It’s also very expensive and constantly upsold, despite being rubbish. If no one fixes this it will be my next startup.
Too many companies have gone down the road of “AI support”, without understanding that AI must rest on the foundation of great infrastructure. Intercom are pushing their AI so hard it’s absolutely infuriating.
https://github.com/gue-ni/redstart
I'm currently working to make things shareable; I also don't want to use a database.
Here is the demo https://notecargo.huedaya.com/
The amount of fine tuning we've put into the model has been incredible. Starting to rival human multi-decade professionals in custom club fitting.
Feels like this will be how all human-tool interaction fitting will go.
Thinking about: A new take on LinkedIn/web-of-trust, bootstrapped by in-person interactions with devices. It seems that the problem of proving who is actually human and getting a sense of how your community values you might be getting more important, and now devices have some new tools to bring that within reach.
[0] https://apps.apple.com/us/app/reflect-track-anything/id64638...
Done with Godot in just 7-8 months, it's fun how fast you can create things when you really focus on something :)
https://wiki.archiveteam.org/index.php/ArchiveBot
My partner shares our journey on X (@hustle_fred), while I’ve been focused on building the product (yep, the techie here :). We’re excited to have onboarded 43 users in our first month, and we're looking forward to getting feedback from the HN community!
https://github.com/Mati365/ckeditor5-livewire
https://github.com/iepathos/debtmap
The first two posts are live: 1. Let There Be a Player — player movement and camera control (https://aibodh.com/posts/bevy-rust-game-development-chapter-...) 2. Let There Be a World — procedural world generation using Wave Function Collapse (https://aibodh.com/posts/bevy-rust-game-development-chapter-...)
Next up: adding physics, collisions, and interaction to make the world feel alive.
From there it’ll grow into combat, UI, sound, polish, and AI-driven NPCs.
1. is something that can poll a bunch of websites' workshop/events pages to see if there are any new events [my mother] wants to go to, and send a digest to her email
2. is a poller to look up the different Safeway/Co-op/Save-On flyers and so on to see what's on sale between the different places, then send a mail with some recipes it found based on those ingredients
I'm most of the way through 1, but haven't started on 2 yet.
Next step is to make a simple login portal so non-trusted persons can submit work (this is a uni project) and get the result/process mailed to them.
https://devmote.net
https://productionapps.ai/
I've been gathering up the supplies to set up a proper radio/computer repair workshop.
Shipping pets and animals across borders is a big problem, and we are building the operating system to solve it at scale. If you are a vet (or work in the veterinary space), we would love to talk to you.
Source: https://github.com/clipperhouse/go-allocations-vsix
A scanner for pilots to convert handwritten flight logs to CSV files: https://apps.apple.com/us/app/flightlogscan/id6739791303
And a silly, fun, speed-based word game: https://apps.apple.com/us/app/scramble-game/id6748549424 (my record is <4 seconds lmk if you can beat it!)
Let me know what you think :D
https://github.com/David-OConnor/daedalus
- 30k requests/month for free
- simple, stable, and fast API
- MCP Server for AI-related workloads
Haunted house trope, but it's a chatbot. Not done yet, but it's going well. The only real blocker is that I ran into the parental controls on the commercial models right away when trying to make gory images, so I had to spin up my own generators. (Compositing by hand is definitely taking forever.)
You're hurting people who are using disposable email addresses because they are privacy focused though.
It’s got the base instruction set implemented and working. A CRT shader, resizable display, and swappable color palettes.
I’m working on sound and a visual debugger for it.
I have some work to do on the Haskell TigerBeetle client and the Haskell postgresql logical replication client library I wrote too.
https://github.com/leogout/rasper-ducky
Duckyscript is a language for the USB Rubber Ducky, which costs approximately $100. A USB Rubber Ducky is a USB key that gets recognized as a keyboard and starts typing text and shortcuts automatically once you plug it into anything. To specify what the key should type, you use Duckyscript.
I'm using CircuitPython. The last thing I did was to de-recursify the interpreter with a stack (a toy illustration of the idea follows).
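The trick, in miniature: keep a stack of (statement list, resume index) frames instead of recursing into nested blocks. The statement shapes below are made up, not the real rasper-ducky AST:

    def run(program):
        # each frame is (statement_list, next_index)
        stack = [(program, 0)]
        while stack:
            stmts, i = stack.pop()
            while i < len(stmts):
                op = stmts[i]
                i += 1
                if op[0] == "STRING":          # e.g. ("STRING", "hello")
                    print(op[1])
                elif op[0] == "BLOCK":         # e.g. ("BLOCK", [body...])
                    stack.append((stmts, i))   # remember where to resume
                    stmts, i = op[1], 0        # descend instead of recursing

    run([("STRING", "a"), ("BLOCK", [("STRING", "b")]), ("STRING", "c")])
    # prints a, b, c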
The more of Duckyscript I implement, the more I think I should create my own language. Duckyscript sucks as a language...
https://apps.apple.com/ch/app/diabetes-tagebuch-plus/id16622...
https://pushup.club
I want to work out at least the minimum amount but always end up procrastinating... for some fortunate ones (me), it only takes about 20 minutes a day to keep in good shape, with stuff you can do at home. We all know this, but for many it somehow never happens.
I want to keep a tally of the push ups I do every day (and squats, etc...). I decided to gamify it, but not in a crappy way. I would like to see my streaks (kind of like how Github shows commits) and how other friends are doing.
Right now it's prototype v0.0.0.0.0.1, as you can see: no UI, and the push-up detector actually kind of detects squats, lol, but I'm working on it. Btw, the push-up detector is client-side only, so rest assured I never get to see your video.
There's a global push-up count, an aggregate of all push-ups everyone does on the site. Right now it's linked to a button, so it's more like a clicker; feel free to exercise your fingers. I figured it would be super nice if one day we could do a million push-ups collaboratively, or just watch it go up in real time, meaning somebody else is working out, which should inspire me to do some as well.
Please leave your feedback and yeah you can join the Push Up Club anytime :D.
https://mikel-zhobro.github.io/3dgsim/
Spatial causality leads to generalisation not present in 2D models.
https://pokemon-ens.com
Features: Chat with page, fix grammar, reply to emails, messages, translate, summarize, etc.
Yes, you can use your own API key.
Please check it out and share your feedback: https://jetwriter.ai
Very very beta. No stated mission just working with smart people on interesting ideas.
I discovered that "least common ancestor" boils down to the intersection of 'root-path' sets, where you select the last item in the set as the 'first/least common ancestor'.
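In code, that trick might look like this (a small sketch with a hypothetical parent-pointer tree, treating the root-paths as ordered sets):

    def root_path(parent, node):
        # walk up to the root, then reverse so the path reads root-first
        path = []
        while node is not None:
            path.append(node)
            node = parent[node]
        return list(reversed(path))

    def lca(parent, a, b):
        common = None
        # walk down from the root in lockstep; the last shared item
        # of the two root-paths is the least/lowest common ancestor
        for x, y in zip(root_path(parent, a), root_path(parent, b)):
            if x != y:
                break
            common = x
        return common

    parent = {"root": None, "l": "root", "r": "root", "ll": "l", "lr": "l"}
    print(lca(parent, "ll", "lr"))  # -> "l"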
Building a new layer of hyper-personalization over the web. Instead of generating more content, it helps you reformat and interact with what already exists, turning any page, paper, or YouTube video into a summary, mind-map, podcast, infographic or chat.
The broader idea is to make the web adaptive to how each person thinks and learns.
The main idea is to bring as many of the agentic tools and features into a single cohesive platform as much as possible so that we can unlock more useful AI use-cases.
On-site surveys for eCommerce and SaaS. It's been an amazing ride leveling up back and forth between product, design, and marketing. Marketing is way more involved than most people on this site realize...
Attracting new monthly sponsors and people willing to buy me the occasional pizza with my crappy HTML skills.
https://brynet.ca/wallofpizza.html
(It was supposed to be completed months ago but got stuck in other issues)
Here's the waitlist and proposal: https://waitlist-tx.pages.dev
-Many say they want to stop doomscrolling and clout-chasing but I don't know how many are actually willing to do so
-Individuals may move here but their friends won't. So the feed will be initially empty by design. Introducing any kind of reward is against our ethos so we are clueless about how to convince existing friend circles to move.
(But also just launched https://ChessHoldEm.net this weekend)
Beyond that, just regular random stuff that comes up here and there, but, for once, my hdd with sidelined projects is slowly being worked through.
My first career was in sales. And most of the time these interactions began with grabbing a sheet of paper and writing to one another. I think small LLMs can help here.
Currently making use of APIs, but I think small models on phones will be good enough soon. Just completed my MVP.
Building desktop environment in the cloud with built in cloud storage, AI, processing, app ecosystem and much more!
This month doubling down on a small house cleaning business that I acquired https://shinygoclean.com
Instead of code, it seems SOPs have become the new love language!
Code obeys logic. People obey trust. That’s the real debugging. Still learning!
1. Fluxmail - https://fluxmail.ai
Fluxmail is an AI-powered email app that helps you get done with email faster. There are a couple of core tenets/features that it has, including:
- local-first - we don't store your emails and we make interactions as fast as possible
- unified inbox - so you can view emails from all your email addresses in one place
- AI-native - helping you draft emails, search for emails, and read through your emails faster
I'd love to hear if these features resonate with you, or if there are other features that you feel are missing from your current email app.
2. ExploreJobs.ai - https://explorejobs.ai
This is a job board for AI jobs and companies. The job market in AI is pretty hot right now, and there are a lot of cool AI companies out there. I'm hoping to connect job seekers with fast-growing AI companies.
* https://gene-expression-programming.com/
It runs fully on-device, including email classification and event extraction
I believe the old internet is still alive and well. Just harder to find now.
https://randomdailyurls.com
People won't read and skim all of those CTAs; instead, try to give them an "aha, interesting" moment ASAP.
AppGoblin is a free place to do app research for understanding which apps use which companies to monetize, track where data is sent and what kinds of ads are shown.
It has some rough edges, but I use it a ton and get a lot of value out of it.
This is built with Rust, egui and SQLite3. The app has a downloader for NSE India reports. These are the daily end of day stock prices. Out of the box the app is really fast, which is expected but still surprises me. I am going to work on improving the stocks chart. I also want to add an AI assisted stocks analyst. Since all the stocks data is on the SQLite3 DB, I should be able to express my stocks screening ideas as plain text and let an LLM generate the SQL and show me in my data grid.
It was really interesting to generate it within 3 days. I had just a few places where I had to copy from app (std) log and paste into my prompt. Most of the time just describing the features was enough. Rust compiler did most of the heavy lifting. I have used a mix of Claude Code and OpenCode (with either GLM 4.5 or Grok Code Fast 1).
I have been generating full-stack web apps. I built and launched https://github.com/brainless/letsorder (https://letsorder.app/). Building full-stack web apps is basically building 2 apps (at a minimum) so desktop apps are way better it seems.
In the long term, I plan to build apps and help others generate theirs. I am building a vibe coding platform (https://github.com/brainless/nocodo). I have a couple of early-stage founders I consult for who take my guidance to generate their products (web and mobile apps + backend).
(It's a frontend to make searching eBay actually pleasant)
So I started working on Librario, an ISBN database that fetches information from several other services, such as Hardcover.app, Google Books, and ISBNDB, merges that information, and returns something more complete than using them alone. It also saves that information in the database for future lookups.
You can see an example response here[1]. Pricing information for books is missing right now because I need to finish the extractor for those, genres need some work[2], and having a 5-month-old baby makes development a tad slow, but the service is almost ready for a preview.
The algorithm to decide what to merge is the hardest part, in my opinion, and very basic right now. It's based on a priority and score system, where different extractors have different priorities and different fields have different scores. Eventually, I wanna try doing something with machine learning instead.
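A guess at the shape of such a priority/score merge; the extractor names echo the sources mentioned above, but the weights, field scores, and field names are made up for illustration:

    # For every field, keep the candidate with the highest
    # extractor_priority * field_score.
    EXTRACTOR_PRIORITY = {"hardcover": 3, "google_books": 2, "isbndb": 1}
    FIELD_SCORE = {"title": 1.0, "authors": 0.9, "genres": 0.3}

    def merge(records):
        # records: list of (extractor_name, dict_of_fields)
        merged, best = {}, {}
        for extractor, fields in records:
            for field, value in fields.items():
                if value in (None, "", []):
                    continue  # skip empty candidates entirely
                score = EXTRACTOR_PRIORITY[extractor] * FIELD_SCORE.get(field, 0.5)
                if score > best.get(field, 0):
                    merged[field], best[field] = value, score
        return merged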
I'd also like to add book summaries to the data somehow, but I haven't figured out a way to do this legally yet. For books in the public domain I could feed the entire book to an LLM and ask it to write a spoiler-free summary, but for other books, that'd land me in legal trouble.
Oh, and related books, and things of the sort. But I'd like to do that based on the information stored in the database itself instead of external sources, so it's something for the future.
Last time I posted about Shelvica some people showed interest in Librario instead, so I decided to make it something I can sell instead of just a service I use in Shelvica[3], hence why I'm focusing more on it these past two weeks.
[1]: https://paste.sr.ht/~jamesponddotco/de80132b8f167f4503c31187...
[2]: In the example you'll see genres such as "English" and "Fiction In English", which is mostly noise. Also things like "Humor", "Humorous", and "Humorous Fiction" for the same book.
[3]: Which is nice, cause that way there are two possible sources of income for the project.
I want to write VoIP plugins using a modern toolchain and benefit from the wider crate ecosystem.
https://apu.software/truegain/
Then it’s on to the next project.
Funny thing is, the advisor started to tell me to sell last week, and so I did. Then last Friday happened. Interesting.
https://jiffylabs.ai/
It's a browser extension right now, and the platform integrates with SSO providers and AI APIs to help discover shadow AI, enforce policies, and create audit trails. Think observability for AI adoption, but also Grammarly, since we help coach end users toward better behavior/outcomes.
Early days but the problem is real, have a few design partners in the F500 already
https://justschedule.me
Take a picture of an event flyer or paste in some text. The event gets added to your calendar.
A tool to help California homeowners lower their property taxes. This works for people who bought in the past years' low-interest environment and are overpaying on taxes because of that.
Feel free to email me, if you have questions: phl.berner@gmail.com
It's a few things:
- very fast Japanese->English dictionary
- hiragana / katakana / number / time reading quizzes
- vocabulary quizzes based on wordlists you define and build
- learn and practice kanji anki-style (using FSRS algo)
- the coolest feature (imo) is a "reader": upload Japanese texts (light novels, children's books, etc), then translate them to your native language to practice your reading comprehension. Select text anywhere on the page (with your cursor) to instantly do a dictionary lookup. An LLM evaluates your translation accuracy (0..100%) and suggests other possible interpretations.
I just revamped the UI look and feel the other day after implementing some other user feedback! I'm now exploring ads as a way to monetize it.
https://github.com/grigio/llm-eval-simple
Still working on growing the audience.
https://gitpushups.com
It's an all-in-one toolkit with one-click version switching, automatic HTTPS for local domains, and an integrated mail catcher.
I've just rolled out some major updates:
1. Local AI Deployment: you can now run models like Llama 3 & Code Llama directly within ServBay.
2. Built-in Tunneling: share local sites with anyone on the internet, ngrok-style, or via frp or Cloudflare.
3. Windows is live! The new Windows version is out and quickly reaching feature parity with our macOS app.
Next up is ServBay 2.0. I'm currently gathering feedback on features like deeper Docker integration and more flexible site configurations. I'd love to hear what the HN community thinks is important.
Check it out at: https://www.servbay.com
https://ftocks.com
Next in the plans is adding more models and compare which one gives better results.
There are some Amish people who rebuild Dewalt, Milwaukee etc battery packs. I'd like a repairable/sustainable platform where I can actually check the health of the battery packs and replace worn out cells as needed.
To give you an idea of the market, original batteries are about $149, and their knockoffs are around $100.
Battery-powered hand tools are heavier, clumsier, generally of lower quality, lower power, and less long-lived than AC-powered tools.
To be honest, there's a little Amish in me: I have hand-powered tools as backup for all my AC tools.
I've been wondering for a while if the display on ebikes could also be a more open and durable part of it.
Interpret your bloodwork for free with the precision of a longevity clinic. You can calculate your biological age using the best bio-age calculators.
https://outerweb.org/explore
And an agentic news digest service which scrapes a few sources (like HackerNews) for technical news and create a daily digest, which you can instruct and skew with words.
A simple document translator that preserves your file's formatting and layout.
In the 2nd stage, I will mathematically establish the best course of action for an individual given the base theory.
In the 3rd stage, I will explain common psychological phenomena through the theory: things like narcissism, anxiety, self-doubt, how to forgive others, etc.
In the 4th stage, I will explain how the theory is the fastest way to learn across multiple domains, and how anyone can become a generalist and critical thinker.
In the 5th stage, I will explain how society will unfold if everyone can become a generalist and critical thinker through the theory, and how this is the next big societal breakthrough, like the Industrial Revolution.
In the 6th and last stage, I will think about how to use this theory to make India the next superpower, as it can give us a demographic advantage.
Shared more about the algorithm here https://x.com/admiralrohan/status/1973312855114998185
Right now, it’s a better way to showcase your really specific industry skills and portfolio of 3D assets (i.e., “LinkedIn for VR/XR”), with hiring layered on.
Starting to add onto the current perf-analysis tools and thinking more about how to get to a “Lovable for VR/XR”.
Working on a plugin for Langfuse to create eval functions and datasets from ingested traces automatically, based on ad-hoc user feedback.
The core idea is to make progression easier to track and follow. After a workout, it analyzes your performance (weight, reps, and RIR), highlights new personal records (PRs), and generates specific targets for your next session. It also reviews your entire program to provide scientific analysis on weekly volume, frequency, and recovery for each muscle group. This gets displayed visually on an anatomy model to help you learn which muscles are involved, and you can track your gains over time with historical performance charts for each exercise.
During a workout, you get a total session timer, an automatic rest timer, and can see your performance from the last session for a clear target to beat. It automatically advances to the next incomplete exercise, and when you need to swap an exercise, it provides context-aware alternatives targeting the same muscles.
It's also deeply customizable:
- The UI has a dark theme, supports multiple languages (English, Spanish, German), lets you adjust the UI scale, and toggle the visibility of detailed muscle names, exercise types, historical performance badges, and a full history card.
- You can set global defaults for weight units (kg/lbs), rest times, and plan targets, or enable/disable metrics like Reps in Reserve (RIR) and estimated 1-Rep Max. The exercise library can be filtered by your available equipment, you can create your own custom exercises with global notes, and there's a built-in weight plate calculator.
- The progression system lets you define default rep ranges and RIR targets, or create specific overrides for different lifts (e.g., a 3-5 rep range for strength, 10-15 for accessories).
- Editing is flexible: you can drag-and-drop to reorder days, exercises, and sets, duplicate workout days, track unilateral exercises (left/right side), and enter data with a quick wheel picker.
https://ILikeAccounting.com
To provide trading insights for users.
YouTube's algorithm is all about engagement: more video game videos, more brainrot. Their algorithm doesn't care about the content as long as the kid is watching.
My system allows parents to define their children's interests (e.g., a 12-year-old who enjoys DIY engineering projects, Neil deGrasse Tyson, and drawing fantasy figures)
.. and specify how the AI should filter video candidates (e.g., excluding YouTube Shorts).
Periodically, the system prompts the child with something like
"Tell me about your favorite family vacation."
And their response to that prompt provides the system with more ideas and interests to suggest videos to them.
email me if you'd like to test jim.jones1@gmail.com
https://www.PAGE.YOGA - Link sharing website
https://www.GamesNotToPlay.com - A couple video games
https://www.ce0.ai - CEO Replacement
https://www.CellularSoup.com - Cellular Automata
https://www.fuck.investments - putting together a fine art gallery
iOS/Mac app for learning Japanese by reading, all in one solution with optional Anki integration
I went full-time on this a couple years ago. I’m now doing a full iOS 26 redesign, just added kanji drawing, and am almost done adding a manga mode via Mokuro. I’m also preparing influencer UGC campaigns as I haven’t marketed it basically at all yet.
Truly very impressive.
Throwing in mine. I've been working on solo deving godot games in the last year.
Working on yet another gambling roguelike.
https://store.steampowered.com/app/3839000/Golden_Gambit
I have an artist contracted to do my real assets now.
If anyone is practiced in game balance please reach out if you want to help!
Basically, an agentic platform for working with rich text documents.
I’ve been building this solo since May and having so much fun with it. I created a canvas renderer and all of the word processor interactions from scratch so I can have maximum control over how things are displayed when it comes to features like AI suggestions and other more novel features I have planned for the future.
I've kind of been wasting time with the Cloudflare Workers engine, trying to build a system that schedules these workers as a lightweight alternative to GitHub Actions. If you are interested in WASM, feel free to reach out; I'm looking to connect with other developers working in the WASM space.
man, myself needs work
Last month was an improvement. This month I can't concentrate for long and I get distracted very easily, but I seem to be able to do more with what I have. A small sense of ambition, that I might be able to do bigger things and might not need to drop out of tech and get a simple job, is returning.
I am trying to use this inhibited, fractured state to clarify thoughts about useless technology and distractions, and about what really matters, because (without wishing to sound haughty) I used to be unusually good at a lot of tech stuff, and now I am not. It is sobering but it is also an insight into what it might be like to be on the outside of technology bullshit, looking in.
[0] https://news.ycombinator.com/item?id=45424854
AI sprite animator for 2D video games.
Not earth shattering, but something that should exist.
I am currently developing a web app consisting of a Spring/Kotlin backend and an Angular frontend that is meant to provide a UI for kubectl. It has OAuth login, allows you to store several Kubernetes configs and select which one to use, and makes it unnecessary to remember all the kubectl commands I can never remember.
It's what I'd like to have if I had to interact with a kubernetes cluster at work. Yes, I know there are several kubernetes UIs already, but remember, this is for 1) learning and 2) following through and completing a project at least somewhat.
I have been trying to study Chinese on my own for a while now and found it very frustrating to spend half the time just looking for simple content to read and listen to. Apps and websites exist, but they usually only have very little content or they ramp up the difficulty too quickly.
Now that LLMs and TTS are quite good, I wanted to try them out for language learning. The goal is to create a vast number of short AI-generated stories to bridge the gap between knowing a few characters and reading real content in Chinese.
Curious to see if it is possible to automatically create stories which are comfortable to read for beginners, or if they sound too much like AI-slop.
Still reducing the design costs of a micro-positioning stage for hobbyists. I observed the driver motion was mostly synchronous and symmetric... Accordingly, given the scale, only a single multiplexed piezoelectric actuator motor driver was actually needed, which cut that part of the design cost by 75%.
Still designing various test platforms to validate other key technologies. Sorry, no spoilers =3
Essentially like yeoman back then, to bootstrap your webapp and all the necessary files more easily.
Currently I am somewhat stuck because of Go's type system, as the UI components require a specific interface for the Dataset or Data/Record entries.
For example, a Pie chart would require a map[string]number which could be a float, percentage string or an integer.
A Line chart would require a slice of map[string]number, where each slice index would represent a step in the timeline.
A table would require a slice of map[string]any where each slice index would represent a step in the culling, but the data types would require a custom rendering method or Stringifier(?) of sorts attached to the data type. So that it's possible to serialize or deserialize the properties (e.g. yes/no in the UI meaning true/false, etc).
As I want to provide UI components that can use whatever struct the developer provides, the Go way would be to use an interface. But that would imply that all data type structs on the backend side would have this type of clutter attached to them.
No idea if something like a Parser and Stringifier method definition would make more sense for the UI components here...or whether or not it's better to have something like a Render method attached per component that does all the stringifying on a per-property basis like a "func(dataset any, index int, column string) string" where the developer needs to do all the typecasting manually.
Manual typecasting like this would be pretty painful as components then cannot exist in pure HTML serialized form, which is essentially the core value proposition of my whole UI components framework.
An alternative would be offering a marshal/unmarshal API similar to how JSON does it, but that would require the reflect package which bloats up the runtime binary by several MB and wouldn't be tinygo compatible, so I heavily would wanna avoid that.
Currently looking for other libraries and best practices, as this issue is really bugging me a lot in the app I'm currently building [3] and it's a pretty annoying type system problem.
Feedback as to how it's solved in other frameworks or languages would be appreciated. Maybe there's an architectural convention I'm not aware of that could solve this.
[1] https://github.com/cookiengineer/gooey-cli
[2] https://github.com/cookiengineer/gooey
[3] https://github.com/cookiengineer/git-evac
OpenRun allows defining your web app configuration in a declarative config using Starlark (which is like a subset of Python). Setting up a full GitOps workflow is just one command:
This will set up a scheduled sync, which will look for new apps in the config and create them. It will also apply any config updates on existing apps and reload apps with the latest source code. After this, no further CLI operations are required; all updates are done declaratively. For containerized apps, OpenRun will directly talk to Docker/Podman to manage the container build and startup.
There are lots of tools which simplify web app deployment. Most of them use a UI-driven approach or an imperative CLI approach. That makes it difficult to recreate an environment. Managing these tools when multiple people need to coordinate changes is also difficult.
Any repo which has a Dockerfile can be deployed directly. For frameworks like Streamlit/Gradio/FastHTML/Shiny/Reflex/Flask/FastAPI, OpenRun supports zero-config deployments; there is no need to even have a Dockerfile. Domain-based deployment is supported for all apps. Path-based deployment is also supported for most frameworks, which makes DNS routing and certificate management easier.
OpenRun currently runs on a single machine with an embedded SQLite database or on multiple machines with an external Postgres database. I plan to support OpenRun as a service on top of Kubernetes, to support auto-scaling. OpenRun implements its own web server, instead of using Traefik/Nginx. That makes it possible to implement features like scaling down to zero and RBAC. The goal with OpenRun is to support declarative deployment for web apps while removing the complexity of maintaining multiple YAML config files. See https://github.com/openrundev/openrun/blob/main/examples/uti... for an example config, each app is just one or two lines of config.
OpenRun makes it easy to set up OAuth/OIDC/SAML based auth, with RBAC. See https://openrun.dev/docs/use-cases/ for a couple of use cases examples: sharing apps with family and sharing across a team. Outside of managed services, I have found it difficult to implement this type of RBAC with any other open source solution.
The idea is that a beginner should be able to wire up a personally useful agent (like a file-finder for your computer) in ten minutes by writing a simple prompt, some simple tools, and running it. Easy to plugin any kind of tracing, etc you want. Have three or four projects in prod which I'll be switching to use it just to make sure it fits all those use-cases.
But I want to be able to go from someone saying "can we build an agent to..." to having the PoC done in a few minutes. Everything else I've looked at so far seems limited, or complicated, or insufficiently hackable for niche use-cases. Or, worst of all, in Python.
https://tac.ooo/historic
-----
COCKTAIL-DKG - A distributed key generation protocol for FROST, based on ChillDKG (but generalized to more elliptic curve groups) -- https://github.com/C2SP/C2SP/pull/164 | https://github.com/C2SP/C2SP/issues/159
-----
A tool for threshold signing software releases that I eventually want to integrate with SigStore, etc. to help folks distribute their code-signing. https://github.com/soatok/freeon
-----
Want E2EE for Mastodon (and other ActivityPub-based software), so you can have encrypted Fediverse DMs? I've been working on the public key transparency aspect of this too.
Spec: https://github.com/fedi-e2ee/public-key-directory-specificat...
Implementation: Coming soon. The empty repository is https://github.com/fedi-e2ee/pkd-server-go but I'll be pushing code in the near future.
You can read more about this project here: https://soatok.blog/category/technology/open-source/fedivers...
It's an AI webapp builder with a twist: I proxy all OpenAI API calls your webapp makes and charge 2x the token rate; so when you publish your webapp onto a subdomain, the users of your webapp are charged 2x on their token usage. Then you, the webapp creator, get 80% of what's left over after I pay OpenAI (and I get 20%).
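To make the split concrete with illustrative numbers (not figures from the site): if a webapp's users consume $10 of tokens at OpenAI's rates, they are billed $20; after the $10 OpenAI bill, the remaining $10 splits into $8 for the creator and $2 for the platform.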
It's also a fun project because I'm making code changes a different way than most people are: I'm having the LLM write AST modification code; my site immediately runs the code spit out by the LLM in order to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
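The general technique, in miniature: instead of editing source text, the generated program manipulates the syntax tree. A toy version using Python's stdlib ast module (the site's actual AST code and target language may differ, and this snippet is written by hand rather than an LLM):

    import ast

    source = "def foo():\n    return foo_helper()\n"
    tree = ast.parse(source)

    class Rename(ast.NodeTransformer):
        # Rewrite the tree: rename the function `foo` to `bar`.
        def visit_FunctionDef(self, node):
            if node.name == "foo":
                node.name = "bar"
            self.generic_visit(node)
            return node

    new_tree = ast.fix_missing_locations(Rename().visit(tree))
    print(ast.unparse(new_tree))  # def bar(): ...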
This is a free license plate tracking game for families on road trips. Currently adding more OAuth providers, and some time zone features.
What I'm building at the moment is a server monitoring solution for STUN, TURN, MQTT, and NTP servers. I wanted the software for this to be portable, so I wrote a simple work queue myself. Python doesn't have built-in linked lists, which is the data structure I'm using for the queues. They allow for O(1) deletes, which you can't really get with many Python data structures; important for work items when you're moving work between queues.
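A sketch of that O(1)-delete idea (not the actual p2pd code): a doubly-linked list with sentinel nodes, where holding a node handle lets you unlink a work item without scanning the queue.

    class Node:
        __slots__ = ("item", "prev", "next")
        def __init__(self, item=None):
            self.item, self.prev, self.next = item, None, None

    class WorkQueue:
        def __init__(self):
            self.head, self.tail = Node(), Node()  # sentinel nodes
            self.head.next, self.tail.prev = self.tail, self.head

        def push(self, item):
            node = Node(item)
            last = self.tail.prev
            last.next = node
            node.prev, node.next = last, self.tail
            self.tail.prev = node
            return node  # keep this handle for O(1) unlink later

        def unlink(self, node):
            # O(1) delete: bypass the node, no scan needed
            node.prev.next, node.next.prev = node.next, node.prev

    q1, q2 = WorkQueue(), WorkQueue()
    handle = q1.push("check stun server")
    q1.unlink(handle)          # move the work item between queues in O(1)
    q2.push(handle.item)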
For the actual workers I keep things very simple: I make about 100 independent Python processes, each with an event loop. This uses up a crapload of memory, but the advantage is that you get parallel execution without any complexity. It would be extremely complex to do that with code alone, and asyncio's event loop doesn't play well with parallelism, so you really only want one per process.
Result: simple, portable Python code that can easily manage monitoring hundreds of servers (sorry, didn't mean for that to sound like ChatGPT, lmao; incidental). The DB for this is memory-based to avoid locking issues. I did use SQLite at first, but even with optimizations there were locking issues. Now I only use SQLite for import/export (checksums).
Not anything special by HN standards but work is here: https://github.com/robertsdotpm/p2pd_server_monitor
I'm at the stage now where I'm adding all the servers to monitor to it. So fun times.
## AI-Related Projects
* *[justinc8687] Migraine Tracker:* This project aims to help users track their migraines using voice input, with the goal of analyzing unstructured data with AI to find root causes. It uses Deepgram for transcription and an LLM for analysis, with a "chat with your data" feature.
* *[dcheong] User Mastery:* A platform for product teams to manage updates, changelogs, roadmaps, documentation, and feedback, utilizing AI to assist.
* *[jared_stewart] Survey Response Automation:* Using LLMs to automate the processing of parent survey responses for a school, aiming for consistent summarization and statistics.
* *[codybontecou] Voice-Script:* A tool that allows users to discuss and generate GitHub issues, pull requests, and code diffs using ChatGPT's voice agents.
* *[conditionnumber] LLM for Data Matching:* Proposes using an LLM to score and match candidates identified by a tool like "jellyjoin," reducing a large number of potential matches to a manageable set for AI analysis.
* *[taherchhabra] Infinite Canvas for AI Generation:* A platform for AI image, video, audio, and 3D generation, designed to help create cohesive stories with consistent characters and locations.
* *[chipotle_coyote] Story Theory Program (Spiritual Successor to Dramatica):* Aims to create a story theory and brainstorming program, drawing inspiration from Dramatica but incorporating modern concepts, and potentially using AI for some aspects.
* *[rhl314] Magnetron (Whiteboard Explainers):* An AI-powered tool that generates whiteboard explainer videos from prompts or documents, using AI for design, animations, and voiceovers.
* *[adamsaparudin] AI SaaS Workflow:* A project focused on enabling users to launch their own AI SaaS applications quickly, abstracting away complexities like user management and billing.
* *[garbage] Dreamly.in (AI Bedtime Stories):* An automated, personalized, and localized bedtime story generator for children, using AI to create stories based on child profiles and themes.
* *[nowittyusername] Metacognitive AI System:* This project focuses on creating an AI agent with multiple specialized LLMs that can reason, analyze, and communicate internally to provide more sophisticated responses to humans, rather than just acting as a simple chatbot.
* *[fjulian] Veila (Privacy-First AI Chat):* A privacy-focused AI chat service that uses a proxy to prevent user profiling and offers end-to-end encrypted history, allowing users to switch models mid-chat.
* *[ai-christianson] Gobii Platform (Open-Source AI Employees):* Browser-based AI agents that can log into real websites, fill out forms, and adapt to changes, functioning as "browser-use cloud" employees.
* *[apf6] Dev Tools for MCP Servers:* Building libraries to help write tests for MCP (Model Context Protocol) servers, focusing on AI-related development.
* *[mfrye0] Plaid/Perplexity for Business Data:* Creating composable deep research APIs powered by a business graph and web search index to integrate business data into applications and AI agent processes.
* *[vishakh82] Monadic DNA Explorer:* A tool to explore genetic traits from GWAS Catalog and user DNA data, with AI insights run locally in a TEE (Trusted Execution Environment).
* *[jerrygoyal] JetWriter.ai:* A Chrome extension that uses AI to assist with tasks on any website, such as chatting with pages, fixing grammar, replying to emails, translating, and summarizing.
* *[chadwittman] Eldrick.golf (AI Golf Club Fitter):* An AI-powered golf club fitting tool that aims to rival human professionals in custom club fitting.
* *[jiffylabs] AI Governance and Security Platform:* A platform and browser extension to provide visibility into AI tool usage within organizations, discover shadow AI, enforce policies, and create audit trails. It also acts as a coach for end-users.
* *[aantix] Alternative YouTube App for Kids:* An app that uses AI to filter YouTube videos based on parental-defined interests and prompts children for input to discover new interests, moving away from engagement-driven algorithms.
* *[qwikhost] Video AI Editor:* A tool for editing videos using AI.
* *[accountisha] CPA Exam Prep Tool:* A system that generates word problems and step-by-step solutions to help individuals prepare for the American CPA exams.
* *[felixding] Kintoun.ai:* A simple document translator that preserves file formatting and layout, likely using AI for translation.
* *[skyfantom] LLM + Stocks Market Analysis:* Experimenting with LLMs for stock market analysis and comparing different models for their effectiveness.
* *[braheus] English-to-Function Definition (LLM):* A library that allows defining functions in English using an LLM, which can then be used like regular TypeScript functions, enabling agentic orchestration.
* *[gametorch] AI Sprite Animator:* An AI-powered tool for animating sprites in 2D video games.
* *[sab_hn] Endless Chinese:* An AI-generated story platform for learning Chinese, aiming to create a vast number of short stories for beginners.
* *[asdev] FleetCode (Coding Agent Control Panel):* An open-source control panel for running coding agents in parallel.
* *[trogdor] AI Document Summarization/Analysis:* A tool that uses AI to analyze documents and provide summaries, potentially for research or other forms of content consumption.
* *[osint.moe] LLM-Powered OSINT Helper:* An app that uses LLMs to build an interactive research graph for Open Source Intelligence (OSINT) gathering.
* *[kintoun.ai] Document Translator:* A tool that translates documents while preserving formatting and layout, likely leveraging AI.
* *[mclaren] AI-powered code generation and analysis tools.*
* *[skanga] Conductor (LLM-Agnostic Framework):* A framework for building sophisticated AI applications using a subagent architecture, inspired by concepts of "The Rise of Subagents."
* *[ashdnazg] Palindrome Finding (CUDA):* Porting code to CUDA to find palindromes, with a focus on GPU optimization and exploring new elements in number series.
* *[veesahni] AI in Customer Communications:* Exploring effective, hype-free usage of AI in customer communications.
* *[cryptoz] Code+=AI (AI Webapp Builder):* A platform for building AI web apps where API calls are proxied, and users are charged for token usage, with creators earning a percentage of the revenue. The LLM is also used to modify code.
* *[exasperaited] Recovering from Cognitive Impairment:* Using AI tools to help clarify thoughts and potentially recover cognitive abilities lost due to a past event.
* *[waxycaps] CEO Replacement:* A project related to AI that has the goal of replacing a CEO.
* *[vladoh] Simple Photo Gallery (V2):* While not AI-specific, the mention of a future SaaS offer for users who don't want to self-host suggests potential for AI-driven features in the future.
* *[dheera] Invoice Generators for "Inconvenience Fees":* While not directly AI, the idea of invoicing for "inconvenience fees" could be an interesting application for AI to determine and quantify such fees.
* *[yomismoaqui] HN Post/Comment Analyzer:* A website for analyzing posts and comments on Hacker News, potentially using AI to filter or summarize content.
* *[ce0.ai] CEO Replacement:* A project explicitly stating it's about replacing a CEO with AI.
* *[robinsloan] Home-cooked App Essay Inspiration:* While not directly an AI project, the mention of this essay and the focus on personal apps could lead to AI-integrated personal tools.
* *[zuhayeer] Levels.fyi Calculator Revamp:* Focusing on improving a calculator page for refreshers and stock growth, which could involve AI for analysis or predictions.
* *[lukehan] AI Data Enrichment Platform:* A platform to help users enrich their data so AI, like ChatGPT, can understand it better, measured by an "AI Understanding Score."
* *[asimovDev] Sound Blaster Command Control:* While primarily reverse engineering, the mention of "creative's multiplatform solutions" could imply future AI integration for smarter control.
* *[daveevad] "Myself, myself needs work":* This self-reflection could involve AI tools for personal development or understanding oneself better.
* *[thenipper] Campaign Management App for TTRPGs:* While primarily a wiki-like app, the potential for AI to assist in game mastering or content generation is present.
It works by specializing for the common case of read-only workloads and short, fixed-length keys/includes (int, uuid, text<=32b, numeric, money, etc - not json) and (optionally) repetitive key-values (a common case with short fixed-length keys). These kinds of indexes/tables are found in nearly every database for lookups, many-many JOIN relationships, materialized views of popular statistics, etc.
Currently, it's "starting to work", with 100% code coverage and performance that usually matches or beats btree in query speed. Due to compression, it can consume as little as 99.95% less memory (!) and put correspondingly less "pressure" on cache/RAM/IO. Of course, there are degenerate cases (e.g. all-unique UUIDs, many INCLUDEs, etc.) where it's about the same size. As with all indexes, performance is limited by the PostgreSQL executor's interface, which is record-at-a-time with high-overhead records. I'd love help coding an FDW which allows aggregates (e.g. COUNT()) to be "pushed down" and executed inside the index; the current interface still requires returning every record instead of a single final answer. Even better would be an FDW interface where substantial query plans could be "pushed down", e.g. COUNT().
The plan is to publish and open source this work.
I'd welcome collaborators and have lots of experience working on small teams at major companies. I'm based in NYC but remote is fine.
- Must be willing to work with LLMs and not "cheat" by hand-writing code.
- Usage testing: must be comfortable with PostgreSQL and indexes. No other experience required!
- Benchmarking: must know SQL indexes and have benchmarking experience; no pgsql internals required.
- For internals work, must know C and SQL. PostgreSQL is tricky to learn but LLMs are excellent teachers!
- Scripting code is in bash, python and Makefile, but again this is all vibe coded and you can ask LLMs what it's doing.
- any environment is fine. I'm using linux/docker (multi-core x86 and arm) but would love help with Windows, native MacOS and SIMD optimization.
- I'm open to porting/moving to Rust, especially if that provides a faster path to restricted environments like AWS RDS/Aurora.
- your ideas welcome! but obviously, we'll need to divide and conquer since the LLMs are making rapid changes to the core and we'll have to deal with code conflicts.
DM to learn more (see my HN profile)