Hey everyone, I’m the astrophotographer, but I’m not OP. I’m assuming OP picked up my article and posted here and that’s ok! So I quickly created an account here to comment.
Having had a quick read through the comments, I just want to say thank you for the kind words! Please follow my IG (https://www.instagram.com/deepskyjourney) to see more of my photography, or drop a comment with any questions on the reddit thread :)
Hi Rod, the images in your gallery and on Instagram are stunning but very low-resolution (unless I'm missing something). The article mentions preparing IMAX-ready photographs. Is there a way to download the full-res versions of your images?
> Is there a way to download those full-res versions of your images?
Maybe it's because HN is usually geared towards "programmers" rather than "artists", but asking for a (free?) download of full-res versions of a photographer's photos is a bit like asking a developer who publishes commercial desktop software for the source code of the program :)
Maybe at least ask to be able to pay for a high-resolution version (not just a print); I know I'd be interested in that too!
A fellow astrophotography enthusiast here! I love to photograph deep-sky objects in the context of their landscapes. There is a lot of math, stacking, tracking, and denoising in the process, but I keep every image true to what you would see if your eyes were far more sensitive. A lot of people don't realize how big some objects are in the night sky: Barnard's Loop, for example, spans about half the constellation of Orion, and Andromeda appears roughly six times the size of the Moon. We just don't see them because many of these objects are very dim, not small.
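For anyone curious what the "stacking" step looks like, here is a minimal sigma-clipped stacking sketch in Python. This illustrates the general technique only, not this photographer's actual pipeline; real tools (DeepSkyStacker, Siril, PixInsight) add registration, weighting, and iterative rejection on top:

```python
import numpy as np

def sigma_clip_stack(frames: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Average aligned frames, rejecting per-pixel outliers
    (satellite trails, cosmic rays, hot pixels).

    frames: shape (n_frames, H, W), already registered/aligned.
    """
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    # Keep only samples within `sigma` standard deviations of the per-pixel mean
    keep = np.abs(frames - mean) <= sigma * std + 1e-12
    # Average the surviving samples at each pixel
    return (frames * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
```

Stacking N aligned frames improves signal-to-noise roughly by a factor of sqrt(N), which is why nebulosity invisible in any single exposure emerges after hours of total integration.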
I was thrilled to read through to the end of the article and discover a fellow Brisbanite! My friends and I were discussing this movie the other night; they will be stoked to keep an eye out for your images.
I’ve had multiple setups over the last two years, but for the images displayed in the movie there were two main setups: a William Optics RedCat 51 II and an Askar 130PHQ, both paired with a ZWO ASI2600MM Pro camera, typically on a Sky-Watcher NEQ6 Pro mount, along with narrowband and RGB filters depending on the target.
This is incredible and wonderful news, huge congratulations! As someone who works at the intersection of design and engineering, the detail about delivering "starless versions" so the credit typography doesn't compete with the bright stars is exactly the kind of invisible technical problem-solving I love reading about on here.
On a personal note, I find it very refreshing to hear that a major studio opted for real captured photography. I love that they specifically wanted the authenticity of real narrowband data; that speaks to the production team's vision. Enjoy the premiere night and feel incredibly proud. I was already planning on watching the movie this weekend (it releases here on the 26th) and now I'm doubly excited knowing this neat little tidbit.
I'm pretty sure this "Dad did something crazy" moment is going to be a core memory for your kids. Congrats!
I'm curious how the starless versions are created. From the steps at the end, I couldn't see a 'this is how stars are removed' step. Maybe it's part of stacking (but most stars would remain present?) or the calibration process treating stars as noise?
Traditionally (pre-AI) you would use another image of the same part of the sky and subtract the items you want to remove from the image.
As an example, terrestrial telescope mirrors get dusty. You're not going to break down the scope just to clean off the dust, as that is a multi-day operation in most cases. So instead you take "flats": exposures of a uniformly lit background that show the dust in its full, dusty glory. When you take your actual images, you calibrate them against the flat (strictly speaking, you divide by the normalized flat rather than subtract it), which removes the shadows cast by the dust. You can use a similar subtraction method for removing brighter stars from an image that would otherwise saturate the CCD and wash out the background. Turns out it doesn't work for planes. Ask me how I know!
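To make the calibration-frame idea concrete: in the standard CCD workflow, dark frames are subtracted and the normalized flat is divided out, so dust shadows and vignetting get boosted back to their true level. A minimal Python sketch of that workflow (frame names are made up for illustration):

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark):
    """Standard CCD calibration: subtract dark signal, divide by normalized flat.

    light:     raw exposure of the target
    dark:      dark frame matching the light's exposure time
    flat:      exposure of a uniformly lit surface (shows dust/vignetting)
    flat_dark: dark frame matching the flat's exposure time
    """
    flat_signal = flat - flat_dark                 # true flat-field response
    flat_norm = flat_signal / flat_signal.mean()   # unit-mean sensitivity map
    return (light - dark) / flat_norm              # dust shadows divided back out
```

A dust mote that blocks, say, 20% of the light at some pixel dims both the flat and the light frame by the same factor, so the division cancels it exactly.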
> Traditionally (pre-AI) you would use another image of the same part of the sky and subtract the items you want to remove from the image.
I'm not an astrophotographer, so I'm curious why that method would work for stars. Aren't the stars fixed relative to the rest of the image? I could see how the technique might work with planets, maybe, but not stars.
Why does the technique not work with aircraft? Because they generally fly on fixed routes?
As the Earth rotates over the course of the night, the background stars and nebulae move as a single unit, no?
Maybe for some nearby stars parallax might work to remove them over the course of half a year. But there's no way the Earth's rotation during a single night could move background stars out of a nebula.
Sure, but the nebulae also move along with the stars. The question is how one can subtract the stars without also subtracting the nebulae. (I'm assuming different filters and/or a database of known star positions.)
The ESA catalog is not precise enough to remove a star from an image of the structure of a nebula, never mind Hipparcos. Filters while photographing and image processing in post are the way to go.
Don't forget that not only does the star need to be removed, but also its diffraction spikes. Those are diffraction artifacts of the optical assembly (spider vanes, or reflections in the lens elements), not mapped by any star catalog ))
> So these are more artistic photo works than real science photos...
I disagree. If there are many flies around a statue, and I photograph the statue but remove the flies in the photo (via AI or any other technique), then I'm still producing an image of something that exists in the world - exactly as it appears in the world.
I agree that the claim "no generative AI used" is technically incorrect, but I do feel that the image does not contain any AI-hallucinated content and therefore is an accurate representation of reality. These structures appear in the image exactly as they exist in nature.
It uses machine/deep learning, but it's hard to characterise as "generative AI".
AI-related definitions aside, if it's a strictly subtractive/destructive tool that is removing light, it's arguably not much different to filtering frequencies!
So this part of the blog post is essentially false: "no generative AI of any kind"
I have yet to see a precise technical definition of what "generative AI" means, but StarXTerminator uses a neural network that generates new data to fill in the gaps where non-stellar objects are obscured by stars. And it advertises itself as "AI powered".
I don't consider photos I take on iPhone to be "AI generated" or even "AI augmented" even though iPhone uses neural networks and "AI" to do basic stuff like low light photography, blurring backgrounds, etc.
As a photographer and machine learning guy, I would call a lot of modern phone photos AI augmented. AI to stack photos or figure out what counts as the background is a little bit of a gray area, but an img-to-img CNN is about as close as you can get to full AI generation without a full GAN or diffusion model.
I agree that I wouldn't call these photos "AI generated", because the majority of what you're seeing is real.
But that's very different to saying that no generative AI was used at all in their production. "AI augmented" sounds pretty accurate to me.
Likewise, if someone posted a photo taken with their iPhone where they had used the built-in AI features to (for instance) remove people or objects, and then they claimed that no AI was involved, I would consider that misleading, even if the photo accurately depicts a real scene in other respects.
“StarNet is a neural network that can remove stars from images in one simple step leaving only the background. More technically, it is a convolutional residual net with encoder-decoder architecture and with L1, Adversarial and Perceptual losses.”
I feel like the stars are probably pretty easy to mask out since they’re very bright relative to the rest of the image. Once you have the mask, each one is small enough that you could probably fill it with the values from adjacent pixels. Kinda like sensor mapping to hide dead pixels. That’s just a guess though, I’m sure there’s more to it than that.
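That guess is essentially the classic non-AI approach: threshold a mask of bright pixels, then fill each masked pixel from its unmasked neighbours. A toy single-channel sketch in Python (function name and parameters are made up for illustration; StarNet/StarXTerminator instead use learned inpainting that can also reconstruct nebulosity hidden under large stars):

```python
import numpy as np

def remove_stars(img: np.ndarray, threshold: float) -> np.ndarray:
    """Naive star removal: mask bright pixels, fill from the local median."""
    out = img.copy()
    mask = img > threshold
    H, W = img.shape
    for y, x in zip(*np.nonzero(mask)):
        # 5x5 neighbourhood around the bright pixel, excluding other bright pixels
        y0, y1 = max(0, y - 2), min(H, y + 3)
        x0, x1 = max(0, x - 2), min(W, x + 3)
        patch = img[y0:y1, x0:x1]
        good = patch[patch <= threshold]
        if good.size:
            out[y, x] = np.median(good)
    return out
```

This works acceptably for small, isolated stars; it falls apart for bright stars with large halos and diffraction spikes, which is exactly where the learned tools earn their keep.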
Bright stars are so bright they literally mask areas of the sky. You'll probably need deconvolution algorithms (CLEAN was the standard some time ago; I don't know whether some AI/deep-inv approach works better nowadays) to remove them.
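For reference, Högbom's CLEAN is simple enough to sketch: repeatedly find the brightest pixel in the residual image, subtract a scaled, shifted copy of the point-spread function there, and record the removed flux as a point-source component. A toy Python version (PSF and parameters are illustrative, not tuned for real data):

```python
import numpy as np

def clean(dirty, psf, gain=0.1, n_iter=500, threshold=0.0):
    """Högbom CLEAN: iteratively subtract scaled copies of the PSF
    at the brightest residual pixel, accumulating point-source components."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    pc = np.array(psf.shape) // 2  # PSF centre
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x]
        if peak <= threshold:
            break
        components[y, x] += gain * peak
        # Subtract the PSF centred on the peak, clipped at image edges
        for (dy, dx), p in np.ndenumerate(psf):
            yy, xx = y + dy - pc[0], x + dx - pc[1]
            if 0 <= yy < residual.shape[0] and 0 <= xx < residual.shape[1]:
                residual[yy, xx] -= gain * peak * p
    return components, residual
```

The small loop gain (here 0.1) is what makes the subtraction stable when sources overlap, at the cost of many iterations per star.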
There are several “AI” deconvolution tools to remove stars which work exceptionally well: two of the most popular ones being StarNet and RC-Astro’s StarXTerminator. I’m willing to bet that the author used the latter for star removal as it’s become something of a standard in the astrophotography world.
Somewhat related: nature photographer/youtuber Dani Connor had her recording of a red squirrel used in the movie Dune (Part 1) for the sound of the desert mouse (muad'dib). Her interview with (Oscar-winning) sound designer Mark Mangini about it:
Incredible work, OP. What a proud feeling you must have. Congrats!!
My wife and I saw the movie this weekend and thought it was great. I adored the book, but I recognize a book can’t be perfectly translated to the screen.
I thought the directors did a good enough job at translating the sci-fi into something the masses would enjoy.
As a general rule, always read the book first. In this case, that holds true - there was too much in the book to cover completely in the movie. It's a pretty quick read as well - you could probably bang it out in a long afternoon, if you were inclined.
That said, I never read Harry Potter and can't imagine going back and reading it now. So, YMMV.
I don’t think that holds here. This has been one of the times where I enjoyed the movie more than the book. I liked the character in the book; in the movie I couldn’t take my eyes off them.
They might be a special case considering the audience, books, author, actors, and the movies themselves grew alongside each other; it’s pretty singular, I think.
Both are wonderful. I thought the movie was an excellent adaptation of the book.
But I am glad I read the book first; I got much more out of it. It goes a lot more in depth into the science and engineering challenges that occur throughout, which I appreciated. I'm not sure I would have read the book in the same way if I had seen the movie first.
I tend to prefer movies as a storytelling medium, and enjoyed watching the story unfold that way. I ended up just wanting to know more about things that were implied in the movie but not explained, and the book filled in those gaps well.
So if you want to do both, and want to get something new out of each, then, having done it that way myself, I would recommend it.
Edit: reviewing my app history, it took me somewhere between 10 and 11 hours to read the book, and I do not read fiction especially fast.
That’s a tough one. I’d recommend the book first, but I can see arguments for both orders.
By reading the book first, you’ll have a better background and understanding of the context of the plot, the science, and the overall objectives of the mission. There are also several “twists” in the book that were cut from the movie for runtime.
I enjoyed the movie after reading because I got to see the story “come to life”.
But I could also understand the perspective of enjoying the movie first, and then having the story/world expanded 8x with a 16hr book.
You could equate the “movie -> book” order to watching the LoTR standard editions first, and then watching the extended editions.
I listened to the audiobook narrated by Ray Porter (on Audible) and would recommend that production if you enjoy audio.
I found that I would have enjoyed the movie a bit more if I hadn't read the book, but it was still a solid 8/10. I'm really glad that a movie like this did well on opening weekend.
The book is fantastic, I’d recommend reading it one way or another. ;) Speaking personally, I lose some motivation to read a book after seeing the movie. But book-based movies of course rarely if ever live up to the book. I read first, so I can’t speak to the other way around, but I think I was looking forward to the movie a lot more than I would have if I hadn’t read the book. I also suspect I was more forgiving of the movie than if I’d seen it cold.
Not the parent, but I've seen the movie and read the book. I think there are a few gaps in the movie that are explained by the book, but some artistic liberties were taken between the book and the film as well.
I would recommend reading the book first at least.
Amazing! Kudos to Hollywood for going to this length to license the work, credit the author, and involve him in the project; and for respecting realism as a goal in its own right, even though "no one will notice" and a similar image might be "just a prompt away." I know how common the latter is these days.
As more and more companies lazily use AI to achieve the same thing, I am doubling down on supporting anything that supports actual, real human art, even if I don't really care about the subject.
I'm seeing the movie in two days. I read the book ages ago and remember I wasn't too fond of the ending either, so now I'm a bit more excited :)
Amazing achievement, congratulations! I can't seem to read it though; it greets me with a "Sorry, you have been blocked" Cloudflare page. Is this HN overloading the website, or did the host accidentally block IPs from Ukraine, perhaps?
The stars were stripped out with neural-network tools (StarNet++/StarXTerminator) at the studio's request so the text credits would read cleanly over them. The underlying nebula data is real, but removing every star from the field puts this firmly in the category of art photography, not scientific imaging.
No one has ever or could ever observe a nebula with zero stars in the frame.
As an amateur astrophotographer, I am both so envious and so happy for you. What a wonderful recognition of your talent and dedication to the craft. Kudos!
That is not what StarNet does. It just removes the stars from the picture you took, nothing else. It also predates generative AI by a few years.
If by "not real" you mean "you removed the stars, so it no longer reflects reality!", then no real photograph exists. For example, OP uses narrow-band filters, and it's common to map the H-alpha wavelength, which is red, to green in the images. Does that make it unreal?
In the end, astrophotography is more art than science; the goal is more about producing aesthetically-pleasing images than doing photometry. Photographers must take some artistic license.
“StarNet is a neural network that can remove stars from images in one simple step leaving only the background. More technically, it is a convolutional residual net with encoder-decoder architecture and with L1, Adversarial and Perceptual losses.”
They're a lot more real than CG/AI. It's very rare and maybe not even possible to have a "true" astrophotography photo.
At those light levels, eyes and camera sensors work very differently and even a "plain" astro photo has either been processed a lot, or else doesn't look like what our eyes would see.
Even straight-out-of-the-camera JPG files have been heavily processed; it's just hidden behind the RAW processor, which we have come to take for granted. Not to mention smartphone photographs, which employ neural networks in the processing pipeline.
Those shots are stunning. Too bad I rarely pay attention to the credits. I always assumed a lot of effort goes into them though, and this post seems to confirm it.
Everyone, do yourselves a favor: skip all the trailers and go see this movie. It was a delight from start to finish. I was so glad I knew nothing about the story going in.
Why? I am currently reading the book as well, and even though I am not a scientist I feel like I keep finding small technical/scientific mistakes that shouldn't be there.
https://www.reddit.com/r/ProjectHailMary/s/NbRv3sj3fs
Cheers,
Rod Prazeres
https://www.instagram.com/dheeranet
Jazz hands
My website is also www.rpastro.com.au :)
Very nice shots. It must be a great feeling to see one's own footage in a feature film!
How long have you been doing astrophotography?
Congrats on this: not only did you get credited in a feature film, you got one of the good ones. Cloud nine for you, enjoy!
This time-lapse probably better visualizes it: https://www.youtube.com/watch?v=wFpeM3fxJoQ
It's done using dedicated astrophotography software (StarXTerminator). Example: https://astrobackyard.com/starnet-astrophotography/
So these are more artistic photo works than real science photos...
Rod Prazeres, the astrophotographer, has given an interview where he talks about the process: https://www.astronomy.com/observing/the-astrophotography-of-...
* https://www.youtube.com/watch?v=YtfzjehDg74
* transcript: https://otter.ai/u/PA9dbWFA7BgPgLZN9CSo1WFAjXk
* https://www.iflscience.com/wildlife-photographers-viral-squi...
* https://markmangini.com/Mark_Mangini/Blog/Entries/2021/11/7_...
Story of her 'adopting' the squirrels:
* https://www.youtube.com/watch?v=3tDlh62AVPo
The name of the squirrel is "Baby Pear"; her viral tweet:
* https://twitter.com/DaniConnorWild/status/127534941750838476...
Kudos to you
Watching the movie first will set the stage for a lot of details that work better in a book than a movie.
I don’t think you can go wrong either way :)
The book is more of a true sci-fi novel, with the relationship stuff keeping it interesting.
I liked both a lot, and think both could be enjoyed fully with or without the other, in either order.
I recommended it to a co-worker, who ended up going with the audiobook, and he found it good.
Telescope: William Optics UltraCat 76
Mount: Sky-Watcher Wave 150i
Camera: ZWO ASI2600MM-Pro
I am currently reading the book.
Amazing movie and the end credit visuals WERE incredible!
Fine, but it is still art photography with heavy processing. Not to criticize the amazing work of Rod Prazeres, who has now commented in this thread.