MarkusWandel3 days ago
I always say "on a scale from no canoe to a $5K canoe, even the crappiest canoe is 80% of the way there". This camera illustrates that for vision. When you hear about those visual implants that give you, say, 16x16 grayscale, you think that's nothing. Yet 30x30 grayscale as seen in this video, especially with live updates and not just a still frame, is... vision. Not 80% of the way there, but it punches way above its weight class in terms of usefulness.
SwtCyber8 hours ago
The moment you add motion and temporal continuity, even a postage-stamp image turns into something your brain can work with
MarkusWandel7 hours ago
The brain really is quite a machine. I've personally had a retinal tear lasered. It's well within my peripheral vision, and the lasering of course did more damage (but prevents it from spreading). How much of this can I see? Nothing! My peripheral vision appears continuous. Probably I'd miss a motion event visible only to that eye, in that particular zone. Not to mention the enormous number of "floaters" one gets, especially at my age (58). Sometimes you see them, but for the most part the brain just filters them out.

Where this becomes relevant is when you consider depixellation. True blur can't be undone, but pixellation without appropriate antialiasing filtering...

https://www.youtube.com/watch?v=acKYYwcxpGk

So if your 30x30 camera has sharp square pixels with no antialiasing filter in front of the sensor, I'll bet the brain would soon learn to "run that depixellation algorithm" and, just from natural motion of the camera, learn to recognize finer detail. Of course that still means training the brain to recognize 900 electrodes, which is beyond the current state of the art (but 16x16 isn't, and the same principle can apply there).
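
To make that concrete, here's a minimal sketch of the shift-and-add approach to multi-frame super-resolution (my illustration, nothing from the video; it assumes the subpixel offsets are already known, e.g. from registering the frames, and the scale factor is arbitrary):

    import numpy as np

    def shift_and_add(frames, offsets, scale=3):
        # frames: list of (H, W) low-res images; offsets: known (dy, dx)
        # subpixel shifts in low-res pixel units; scale: upsampling factor.
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            # Drop each low-res sample into the nearest fine-grid cell.
            yi = np.clip(np.round((np.arange(h)[:, None] + dy) * scale),
                         0, h * scale - 1).astype(int)
            xi = np.clip(np.round((np.arange(w)[None, :] + dx) * scale),
                         0, w * scale - 1).astype(int)
            np.add.at(acc, (yi, xi), frame)
            np.add.at(cnt, (yi, xi), 1.0)
        return acc / np.maximum(cnt, 1)  # unsampled fine cells stay 0

With enough frames at varied subpixel offsets the fine grid fills in, and detail beyond the native 30x30 emerges. Presumably the brain does something far more sophisticated, but that's the gist.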

jacquesm5 hours ago
It would be interesting to see how far you could push that. I bet just two scanlines side-by-side would be enough for complete imaging. Maybe even just one, but that would require a lot more pre-processing and much finer control over the angle of movement. Congrats on the positive outcome of that surgery, that must have been pretty scary.
dehrmann2 hours ago
In some situations, you can trade resolution for frequency and maintain quality. 1-bit audio is a thing:

https://en.wikipedia.org/wiki/Direct_Stream_Digital
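
To see the trade in miniature, here's a toy first-order delta-sigma modulator, the principle behind DSD (real DSD runs at 64x the CD sample rate with higher-order noise shaping; everything below is simplified):

    import numpy as np

    def delta_sigma_1bit(x):
        # x: oversampled signal in [-1, 1]. Returns a +/-1 bitstream whose
        # local average tracks x; quantization noise is pushed to high
        # frequencies, where a lowpass filter can remove it.
        out = np.empty(len(x))
        acc = 0.0
        for i, s in enumerate(x):
            bit = 1.0 if acc >= 0.0 else -1.0  # the 1-bit quantizer
            out[i] = bit
            acc += s - bit                     # integrate the error
        return out

    t = np.arange(48000) / 48000.0
    bits = delta_sigma_1bit(0.5 * np.sin(2 * np.pi * 440 * t))
    audio = np.convolve(bits, np.ones(32) / 32, mode="same")  # crude lowpass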

dhosek1 hour ago
Back in the Apple ][ days, the timing for writing to the cassette port and to the speaker was identical (just poking different addresses), so you could load audio from cassette using the ROM routines to read a program from tape, then use a modified routine that wrote to the speaker address instead of the cassette port address to play back the audio at 1-bit quality. It kind of sounded like something recorded off AM radio.

I also remember a lot of experimenting with timing to try to get a simulation of polyphonic sound by trying to toggle the speaker at the zeros of sin(f₁t) + sin(f₂t).
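
That trick is easy to sketch today (a reconstruction of the idea, not the original code; the frequencies are arbitrary): find the zero crossings of the summed tones and toggle the 1-bit speaker at exactly those instants.

    import numpy as np

    def toggle_times(f1, f2, duration, step=1e-6):
        # Instants at which sin(2*pi*f1*t) + sin(2*pi*f2*t) changes sign.
        # Flipping the speaker at these times yields a square-ish wave with
        # strong spectral lines at both f1 and f2.
        t = np.arange(0.0, duration, step)
        s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
        crossings = np.diff(np.signbit(s).astype(np.int8)) != 0
        return t[:-1][crossings]

    print(toggle_times(440.0, 660.0, 0.01))  # toggle schedule for a 10 ms chord

On the Apple ][ that schedule would have been cycle-counted delay loops between hits to the speaker soft switch.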

lillecarl16 hours ago
Diminishing returns explained through canoes :)

16x16 sounds really shit to me, who still has perfect vision, but I bet it's life-changing to be able to identify the presence or absence of stuff around you, and such! Yay for technology!

ACCount3714 hours ago
This kind of thing is really held back by BCI tech.

By now, we have smartphones with camera systems that beat human eyes, and SoCs powerful enough to perform whatever image processing you want them to, in real time.

But our best neural interfaces have throughput close to that of a dial-up modem, and questionable longevity. Other technological blockers have advanced in leaps and bounds, but state-of-the-art BCI today is not that far from where it was 20 years ago. Because medicine is where innovation goes to die.

It's why I'm excited for the new generation of BCIs like Neuralink. For now, they're mostly replicating the old capabilities, but with better fundamentals. But once the fundamentals - interface longevity, ease of installation, ease of adaptation - are there? We might actually get more capable, more scalable BCIs.

SiempreViernes13 hours ago
> Because medicine is where "move fast and break things" means people immediately die.

Fixed the typo for you.

ACCount3712 hours ago
Not moving fast and not breaking things means people die slowly and excruciatingly. Because the solutions for their medical issues were not developed in time.

Inaction has a price, you know.

chmod7751 hour ago
The majority of treatments people have ever thought up, and still think up today, are somewhere between useless and terrible ideas. Whatever you think is so exciting right now, a million other things looked just as exciting before. They don't anymore.

We didn't come up with these rules around medical treatments out of nowhere; humanity learned them through painful lessons.

The medical field used to operate very differently and I do not want to go back to those times.

omnicognate11 hours ago
It has a price for the person with the condition. For the person developing the cure, it does not (except perhaps opportunity cost: money that could have been made but wasn't), whereas killing their patients can carry an extremely high one.
jama2116 hours ago
You’re starting to sound terrifyingly like an unethical scientist. We know how that ends, we’ve been down that road before, and we know why it is a terrible idea.
rogerrogerr3 hours ago
There is a lot of space between “persons with a debilitating condition are prohibited from choosing to take a risky treatment that might help” and “hey let’s feed these prisoners smallpox-laced cheese for a week and see what happens”.

The “no harm, ever” crowd does not have a monopoly on ethics.

arcanemachiner14 hours ago
To anyone wondering:

BCI == Brain-computer interface

https://en.wikipedia.org/wiki/Brain–computer_interface

Lapsa14 hours ago
Mind-reading technology has already arrived: radiomyography, and neural networks deciphering EEGs.
ACCount3713 hours ago
Not really. Non-invasive interfaces don't have the resolution. Can't make an omelet without cracking open a few skulls.
Lapsa10 hours ago
they do read my mind at least to some extent -> "The paper concludes that it is possible to detect changes in the thickness and the properties of the muscle solely by evaluating the reflection coefficient of an antenna structure." https://ieeexplore.ieee.org/document/6711930
metalman14 hours ago
it is a good illustration of something like Moore's law, for a coming end point where a handheld device will have more than enough capability and capacity to do ANYTHING a mere mortal will EVER require, and the death of options and features, and a return to personal autonomy and responsibility

AI is the final failure of "intermittent" wipers, which, like in my latest car, are irrevocably enabled to smear the road grime and imperceptible "rain" into a goo, blocking my ability to see

makeitdouble12 hours ago
True. Then we cross a threshold where things that weren't even thought possible become reachable, and we're back on the treadmill.

That's what's happening with VR: we came to a point where increasing DPI for a laptop or phone seemed to make no sense; but that was also the point where VR started to be reachable, and there a 300 DPI screen is crude and we'd really want 3x that pixel density.

immibis13 hours ago
use the washer button to spray the windshield with water and help the goo wipe off
metalman7 hours ago
yes, obviously, but my point is that I am now tasked with helping the "feature" limp along whenever it lurches, unexpectedly, into action, thereby ADDING distraction, which, if you read the ancient myths and legends, is one of the main methods by which evil spirits and demons undermine and defeat the unwary... and lull them into becoming possessed hosts for said entities.

who's working for whom here, anyway?

already?

mdtrooper13 hours ago
This kind of news is, for me, the real news on this website, rather than another fancy tech product from Apple or a similar corporation.

Sincerely, many thanks.

bbeonx3 hours ago
Agreed... I think it's fine to keep up with what the corporate world is doing, but these projects bring me real joy
SwtCyber8 hours ago
Corporate launches are predictable and polished; projects like this are the opposite
zamadatix11 hours ago
One of the comments from the creator answered one of my questions https://www.reddit.com/r/3Dprinting/comments/1olyzn6/comment...:

> Do you mean the refresh rate should be higher? There's two things limiting that:
> - The sensor isn't optimized for actually reading out images, normally it just does internal processing and spits out motion data (which is at high speed). You can only read images at about 90Hz
> - Writing to the screen is slow because it doesn't support super high clock speeds. Drawing a 3x scale image (90x90 pixels) plus reading from the sensor, I can get about 20Hz, and a 1x scale image (30x30 pixels) I can get 50Hz.

I figured there would be limitations around the second, but I was hoping the first wasn't such a big limit.
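
Incidentally, the two quoted rates are enough to split the per-frame cost into a fixed part and a per-pixel display cost, assuming a simple linear model (my back-of-envelope, not the creator's figures):

    import numpy as np

    # frame_time = fixed_overhead + per_pixel_cost * pixels, fit to the two
    # quoted data points: 50 fps at 30x30 and 20 fps at 90x90.
    A = np.array([[1.0, 30 * 30],
                  [1.0, 90 * 90]])
    t = np.array([1 / 50, 1 / 20])  # seconds per frame
    fixed, per_px = np.linalg.solve(A, t)
    print(f"fixed ~{fixed * 1e3:.1f} ms/frame, display ~{per_px * 1e6:.1f} us/pixel")
    # -> about 16 ms fixed (consistent with the ~10 ms sensor read plus other
    #    per-frame overhead) and about 4 us per pixel pushed to the display.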

gsf_emergency_615 hours ago
Compressed sensing! What Terence Tao uses to sell math funding!!

https://www.youtube.com/watch?v=EE9AETSoPHw&t=44

https://www.instructables.com/Single-Pixel-Camera-Using-an-L...

(Okay not the same guy, but I wanted to share this somewhat related "extreme" camera project)
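
For anyone who wants the single-pixel flavor in a few lines, here's a toy compressed-sensing recovery (my sketch: Gaussian measurement patterns and plain ISTA; a real single-pixel camera uses 0/1 DMD masks and a sparsifying basis such as wavelets):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8                # signal size, measurements, sparsity

    x = np.zeros(n)                     # a k-sparse "scene"
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    A = rng.normal(size=(m, n)) / np.sqrt(m)  # one row per measurement pattern
    y = A @ x                                 # one detector reading per pattern

    # ISTA: gradient step on 0.5*||A @ xhat - y||^2, then soft-threshold
    # toward sparsity.
    xhat = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    lam = 0.01
    for _ in range(1000):
        xhat = xhat - step * (A.T @ (A @ xhat - y))
        xhat = np.sign(xhat) * np.maximum(np.abs(xhat) - step * lam, 0.0)

    print("relative error:", np.linalg.norm(xhat - x) / np.linalg.norm(x))

The point being: with far fewer measurements than pixels, a sparsity-exploiting solver still recovers the scene. That's the whole pitch.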

fph13 hours ago
Is this compressed sensing though? The description says "Sensor 30x30 pixels, 64 colors (ADNS-3090 if you wanna look it up)", so definitely not a single-pixel camera.
gsf_emergency_611 hours ago
Sorry for the confusion. This is a different setup. TFA uses the 30x30 sensor and no compressed sensing. The link above uses a single photodetector. They also use an LED matrix, but that's to make the _image_ (I think)
HPsquared12 hours ago
I wonder how much quality you could get out of that sensor.
shit_game10 hours ago
Very cool project. I love the detail the poster went into in their linked video post about working with the sensor and their implementation.

> Optical computer mice work by detecting movement with a photoelectric cell (or sensor) and a light. The light is emitted downward, striking a desk or mousepad, and then reflecting to the sensor. The sensor has a lens to help direct the reflected light, enabling the mouse to convert precise physical movement into an input for the computer’s on-screen cursor. The way the reflected changes in response to movement is translated into cursor movement values.

I can't tell if this grammatical error is a result of nonchalant editing and a lack of proofreading or a person touching-up LLM content.

> It’s a clever solution for a fundamental computer problem: how to control the cursor. For most computer users, that’s fine, and they can happily use their mouse and go about their day. But when Dycus came across a PCB from an old optical mouse, which they had saved because they knew it was possible to read images from an optical mouse sensor, the itch to build a mouse-based camera was too much to ignore.

Ah, it's an LLM. Dogshit grifter article. Honestly, the HN link should be changed to the reddit post.

foxglacier3 hours ago
LLM or not matters less than the fact that it's just bad, reader-hostile writing: a dump of trivial details that glosses over the relevant part (how a mouse detects movement).
kachapopopow4 hours ago
Waiting until someone builds a high speed camera using mouse sensors.
jacquesm5 hours ago
These are 'optical flow' chips. They are quite interesting for many reasons.
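
The core operation is simple enough to sketch (a toy version, assuming two consecutive frames and small motion; the chip does the equivalent thousands of times per second in silicon):

    import numpy as np

    def estimate_shift(prev, curr, max_shift=4):
        # Brute-force block matching: try every (dy, dx) in a small window
        # and keep the shift minimizing the mean squared difference over
        # the overlap. The winner becomes the reported cursor motion.
        best, best_err = (0, 0), np.inf
        h, w = prev.shape
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                a = prev[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
                b = curr[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
                err = np.mean((a - b) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best
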
supportengineer7 hours ago
This is fantastic. What an amazing project! There is a certain segment of photography enthusiasts who love the aesthetic.
JKCalhoun10 hours ago
"I made a camera from an optical mouse. 30x30 pixels in 64 glorious shades of gray!"

I wonder why so many shades of grey? Fancy!

(Yeah, the U.K. spelling of "grey" looks more "gray" to these American eyes.)

Hilarious too that this article is on Petapixel. (Centipixel?)

jan_Sate11 hours ago
Impressive. That's what I read HN for!
foxglacier15 hours ago
I have to say, the Game Boy camera doesn't have only 4 colors. It has an analog output you can connect to your own ADC with more bits and get more shades of gray. I even managed to get color pictures out of it by swapping color filters and combining the images.
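
Stacking those filtered exposures back into color is a few lines (a sketch assuming the three frames are already aligned and similarly exposed):

    import numpy as np

    def combine_rgb(r_frame, g_frame, b_frame):
        # Three grayscale exposures, each shot through a different color
        # filter, stacked into an (H, W, 3) RGB image scaled to [0, 1].
        rgb = np.stack([r_frame, g_frame, b_frame], axis=-1).astype(float)
        rgb -= rgb.min()
        return rgb / max(rgb.max(), 1e-9)

Per-channel white balance would be the obvious next refinement, since the filters won't transmit equally.
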
eugene330611 hours ago
Just in case the author is here: what's the FPS?
madars8 hours ago
On Reddit, the author says "The preview is shown at 20fps for a 3x scale image (90x90 pixels) and 50fps for a 1x scale image. This is due to the time it takes to read the image data from the sensor (~10ms) and the max write speed of the display.", and adds that optical-mouse motion tracking goes up to 6400 fps on this sensor, but you can't actually transmit images at that rate.

https://old.reddit.com/r/electronics/comments/1olyu7r/i_made...

SwtCyber8 hours ago
What I love most is that it takes something we all interact with every day and turns it into something completely unexpected
ck25 hours ago
vaguely related but exponentially more impressive

camera the size of a grain of rice with 320x320 resolution

https://ams-osram.com/products/sensor-solutions/cmos-image-s...

buildbot5 hours ago
Woah, “wafer level optics” sounds really fancy

https://www.mouser.com/datasheet/3/5912/1/NanEyeC_DS000503_5...