Aftershoot Pro review: The promise of an AI assistant who’ll take away all the gruntwork


Aftershoot aims to lighten the professional photographer’s burden by automatically culling large image galleries to sort the wheat from the chaff, and then performing basic edits that match your own editing style.

Unless you’ve been hiding under a rock, you’ve noticed the onslaught of AI these days. As the state of the art of machine learning moves forwards in leaps and bounds, it’s making headlines on a daily basis. And in pretty much every field, software makers are quickly integrating new AI tools into their creations.

This is definitely true of photography software, and one salient example can be found in Aftershoot Pro, a program aimed at professional photographers who capture images en masse, and who tend to have clearly-defined subjects and shooting styles. Aftershoot uses AI-based algorithms to automatically cull and edit photos from large shoots, making light work of the hitherto-tedious task of going through enormous piles of photos.

Aftershoot works best on photos that share a distinct style. If you’re a portrait, event or sports photographer who churns out similar photos by the ream, you’ll want to take a closer look at what Aftershoot can do to speed up your workflow. If you’re a more casual shooter who tends to flit from subject to subject, or someone who’s constantly pushing your boundaries and reinventing your artistic style, well…you aren’t really the target here.

Key features

  • Automatically culls your images and assigns ratings and color tags to each one
  • Groups duplicates and flags shots that are blurred or have closed eyes
  • (Currently in beta) Can learn and mimic your editing style
  • Aimed at professionals who handle very large quantities of images
  • Available for macOS or Windows and works with Lightroom or Capture One
  • Charges a monthly or annual subscription but no per-image fees
  • Editing features will come at an extra, as-yet-unannounced cost

Compatible with both Windows and macOS machines and functioning as a helper for Adobe Lightroom Classic or CC and Capture One Pro, Aftershoot is available immediately to help cull your galleries. Its auto-editing functionality, meanwhile, is currently in public beta testing. Aftershoot Edits is also reliant upon Adobe Lightroom at the current time, although there are plans for Capture One support in the future as well.

Pricing is set at $14.99 per month if billed monthly or $9.99 per month if billed annually, and the software allows use on multiple machines with no cap on the number of projects and images you can process. Note, however, that while Aftershoot Edits is currently free during the beta period, it will involve an additional (and as yet undisclosed) fee once released. (And given its $499/year “unlimited lifetime” Founder’s Deal, it’s not likely to be cheap.)

Aftershoot’s right panel can be collapsed when it’s not needed, saving screen space and putting the emphasis back on your images.

A clean and simple user interface

Aftershoot’s user interface is clean and straightforward. The bulk of its home page lists your various imported albums and prompts you to add new ones. A collapsible panel at screen right guides you through the process of creating editing profiles as well as recapping the results of your past culling work and giving information on where to get community support.

When you hover over an album or create a new one, you’re prompted as to whether you’d like to cull or edit the images therein. With AI doing the grunt work, both the culling and editing tools present only a relatively small number of options for the user to control. And to help you understand these, help and support functionality is present throughout, either inline as appropriate or through a tutorials option in the top menu bar.

Although its UI is quite straightforward, Aftershoot nevertheless comes packed with tutorial videos. An extensive What’s New change log and a tool to request tech support are also included.

I found the video tutorials to be very helpful. That said, the program is evolving fast, and the videos occasionally refer to features which have changed since they were recorded – for example, the tutorial on filtering refers to a filter group called ‘sneak peeks,’ which has since been renamed ‘highlights.’ Those inconsistencies can occasionally prove a little confusing.

Quickly tame unruly galleries with Aftershoot Cull

The current meat of the program is automatically culling large galleries to extract your keepers. To get started with culling, you simply point Aftershoot at a folder (or a nested group of folders) containing the images you’d like to cull. Then you select whether you’d like the program to look at Raws, JPEGs or both formats.

If your chosen folder is on a flash card, you can opt to have Aftershoot handle ingestion at this point rather than working from the card directly; it will then copy your ingested images to up to two different drives as part of the culling process. Another option lets you decide whether to share image data with the company to help further refine its culling algorithms; you can disable this if you want and keep everything local to your machine.

Culling is extremely simple: you select one of eight shooting types, enable or disable a handful of options and adjust the thresholds for a few variables, then the cull happens automatically.

Although it does allow manual culling – browsing your images and using keyboard shortcuts to rate or color tag them – the point of the software is to let its automated culling loose on your galleries.

To kick things off, you first need to tell Aftershoot what kind of photo shoot your images are from; the options are Weddings & Engagements, Portrait & Headshots, Family Portraits, Boudoir, Sports, School or Newborn. There’s also an Others option if your shoot doesn’t fall neatly into one of these categories.

You can tell Aftershoot what percentage of the overall shots you’d like to be offered up as highlights or selected from each group of duplicate images. You can also adjust its threshold levels for identifying blurred or duplicated images. Finally, you can switch on or off the program’s blur, duplicate and closed-eye-detection algorithms.

Once culling is complete, you can quickly review the assigned ratings and color tags. With an image selected, the right panel shows you both key faces within it and duplicate versions of it.

Decent culling performance even on older hardware

And that’s all there is to it. Once you’ve made these few choices, Aftershoot will examine all of your images, assigning ratings and color tags automatically based on how it evaluates each shot.

The process is reasonably swift even on modest hardware: it went through around two 24MP images per second on my 2018-vintage Dell XPS 15 9570 laptop with its NVIDIA GTX 1050 Ti Max-Q graphics processor, using Aftershoot’s default Medium performance setting. On higher-end hardware, you can expect substantially faster performance.
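
That throughput makes it easy to estimate how long a cull will take. A quick back-of-the-envelope sketch – the ~2 images/sec figure is from my test machine, and for simplicity I assume it stays constant across the whole shoot:

```python
def cull_time_minutes(num_images: int, images_per_second: float = 2.0) -> float:
    """Estimate how long an automated cull takes, in minutes,
    at a given sustained throughput."""
    return num_images / images_per_second / 60.0

# A 5,000-image wedding shoot at ~2 images/sec:
print(round(cull_time_minutes(5000)))  # roughly 42 minutes
```

Even at laptop speeds, in other words, a full wedding gallery is culled in well under an hour – far quicker than a human editor could manage.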

The top menu allows you to switch between Low, Medium or High system resource usage when culling, which is handy if you want to use your machine for other tasks while culling.

Extensive options are provided to filter your images, not just by ratings or color tags, but also variables like orientation, camera or lens model and even (if detected) their serial numbers.

Reviewing its work: generous filtering options and manual overrides

Once the automatic culling has been performed, Aftershoot shows your images alongside their ratings and color tags, viewable either singly or as a gallery. Totals are shown for each category of images.

Its default ranking categories are Selected (which includes both four- and five-star images, with the latter receiving a green color tag), Highlights (four-star/blue), Blurred (two-star/red) and Closed Eyes (one-star/purple).
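
To make those defaults concrete, here’s a minimal sketch of how the exported star ratings and color tags could be filtered downstream. The dictionary mirrors the pairings just described; the helper names are my own illustration, not part of any Aftershoot API:

```python
# Aftershoot's default star-rating → color-tag pairs, per the defaults above.
DEFAULT_COLOR_TAGS = {5: "green", 4: "blue", 2: "red", 1: "purple"}

def keepers(images, min_stars=4):
    """Return filenames whose rating meets the 'Selected' threshold.
    `images` is a list of (filename, stars) pairs."""
    return [name for name, stars in images if stars >= min_stars]

shoot = [("img_001.nef", 5), ("img_002.nef", 2), ("img_003.nef", 4)]
print(keepers(shoot))  # ['img_001.nef', 'img_003.nef']
```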

You’re also shown how many images are grouped as duplicates or have other warnings, and when you select an image you’re shown closeups of key faces in it, to allow you to judge their expressions. (And if you click on a face in the right panel, you’ll get an enlarged view of that face in the main panel for even better visibility.) For duplicate groups, when you select one image you’re shown the neighboring dupes in the right panel.

You can filter the gallery by ratings or color tags, as well as by a wide range of other variables including orientation, file type, camera or lens type and serial numbers, as well as embedded photographer names and whether or not faces or duplication were detected.

If you disagree with any of the automatic culling choices, you can override it at this point, either with user-configurable keyboard shortcuts or by clicking icons on-screen. (The user interface works nicely with a touchscreen, too.)

Once culling is complete, Aftershoot can export your gallery complete with ratings and color tags to Lightroom Classic/CC or Capture One. If you don’t use these programs, you can export images to a folder instead before using your image editor of choice.

Don’t use Adobe or Capture One? That’s just fine!

Once you’re satisfied with the results of culling, you can then select the rating(s) you wish to export. On clicking the Export button, just these images will be handed off to either Adobe Lightroom Classic/CC or Capture One Pro for editing.

Alternatively, if you’re not a Lightroom or Capture One user, you can have Aftershoot move or copy your chosen images to a different folder, optionally renaming them along the way. This gives you a handy folder of keeper images on which you can then use any other image editor of your choice.

But enough of the theory, how does all of this perform in real-world use? Let’s break it down.

Duplicate grouping: works well, but only within Aftershoot itself

In my testing, I found that the duplicate-grouping functionality worked well, and made much lighter work of going through large shoots with liberal use of burst capture or multiple attempts at the same basic scene. Aftershoot automatically chooses what it considers to be the best shots from each set of duplicates, and you can opt to have 10%, 20% or 30% of the images in each set selected for you.
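
Aftershoot doesn’t disclose how it scores images within a duplicate set, but the selection behavior described above – keep the top 10%, 20% or 30% of each group, always at least one frame – can be sketched as follows. The quality scores here are hypothetical stand-ins for whatever internal metric the program uses:

```python
import math

def select_from_groups(groups, pct=0.20):
    """From each duplicate group (a list of (filename, score) pairs),
    keep the top `pct` of images by score -- always at least one."""
    selected = []
    for group in groups:
        keep = max(1, math.ceil(len(group) * pct))
        ranked = sorted(group, key=lambda item: item[1], reverse=True)
        selected.extend(name for name, _ in ranked[:keep])
    return selected

burst = [[("a.jpg", 0.9), ("b.jpg", 0.4), ("c.jpg", 0.7),
          ("d.jpg", 0.2), ("e.jpg", 0.5)]]
print(select_from_groups(burst))  # ['a.jpg']
```

Note the `max(1, …)` floor: as discussed below, even a group with no sharp, eyes-open frames still yields one pick.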

When exporting culled images to a folder you can have your images automatically renamed, restoring some order even if your image editor doesn’t recognize the ratings or color tags.

There are a couple of quirks to note, though. Firstly, if there are no sharp, eyes-open shots within a set of duplicates, the algorithm will still choose what it considers the best of the set to feature. This can result in less-than-optimal images appearing in your gallery, but Aftershoot’s reasoning does make some sense.

Aftershoot’s makers reason that if a whole group contains blurred or eyes-closed shots, you may have had artistic reasons for it. And in fairness, if this wasn’t the case and they actually are all unwanted shots, it’s easy enough to remove them.

The second thing to be aware of is that the set-grouping is meaningful only within Aftershoot itself. Your photos won’t remain grouped similarly once you export them to Lightroom or Capture One.

Aftershoot automatically detected the closed eyes in this shot and so flagged it with both a one-star rating and purple color tag.

Closed-eye detection: a definite time-saver

As for the closed-eye detection, I found it quite useful. It didn’t catch closed eyes every time, but every shot it did flag was flagged correctly – I saw no false positives. That means it’s definitely saving you some culling time, even if a little manual review still remains.

Blur detection: feels a bit more hit-and-miss

The blur detection seemed a bit more hit-and-miss to me. I found some shots that were incorrectly marked as blurred when they were sharp, and also a fair few shots that were blurred across the entire image but weren’t detected as such. (And that was true even when I changed the threshold to the highest-level ‘Strict’ option.)

This photo, being viewed at Aftershoot’s 300% zoom level, was incorrectly tagged as being blurred but it’s pretty clearly crisp across most of the shot as shown in the full-sized version here.

In fairness to Aftershoot, though, a lot of the images I had on hand for this review were motorsports images, which honestly fall a little outside its current bailiwick. (As it happens, the company has been soliciting motorsports shoots with which to hone its culling and editing algorithms while I’ve been working on this review, so that seems likely to change.)

Right now the program seems best-suited to portraiture of varying kinds.

Right now the program seems best-suited to portraiture of varying kinds, with the bulk of its shooting styles being portrait-related. The instances of incorrect blur detection were far fewer when dealing with portrait shots than with, say, race cars or other sports subjects. Still, detecting sharpness doesn’t feel like it should be that challenging, and I’d like to see Aftershoot refined to more reliably distinguish sharp frames from blurred ones.

This shot, meanwhile, wasn’t tagged as blurred even though nothing in the frame is even close to being in focus, as you can see in the full-sized version here.

Keeper choice: not on par with a human editor

Overall, Aftershoot doesn’t (yet) seem to understand photos the same way a person would. That can lead to some decidedly strange decisions when determining its star ratings.

I noticed more than a few shots with things like horseback riders whose heads were cut out of the top of the photo, or individual race cars that were partially cut out of one side of the frame, that were nevertheless given five-star ratings. Again, though, I didn’t really see similar issues when dealing with portraiture.

Aftershoot’s culling decisions can sometimes be pretty wacky, making choices a human being never would. Here, the totally unusable image on the left was given a five-star rating, while the one at right merited just three stars despite being crisp and quite usable. Even the horse appears to be ‘smiling’ for the camera!

The AI editing beta impresses, but needs a large library to learn from

When it comes to its AI Editing app, currently in beta, Aftershoot does a pretty decent job with cropping and rotating, but again there are some provisos. Firstly, for the time being the AI editing features work only with Adobe Lightroom. Capture One support is on the cards, but it’s not yet available even in beta testing.

You also need to generate training profiles separately for black-and-white and color shots, and the training process will be much faster if your Lightroom database includes smart previews. (If it doesn’t, you’ll be prompted to install Adobe DNG Converter, and Aftershoot will then be able to work from the Raws, but will take a fair bit longer to do so.)

Even more significant, though, is the sheer number of photos you need to provide to train the AI on your editing style in the first place. You’ll need to let Aftershoot examine at least 2,500 hand-edited images to make a rough profile of your editing style, and it’s recommended to provide at least 5,000 shots for training to get a really accurate profile.

Here, Aftershoot Edits is attempting to replicate my own style after being trained on a gallery of 2,500+ images that I hand-edited. I’d say it did a pretty good job of copying the punchy, bold look I used in the images on which it was trained, and it also tightened up the framing a little.

And if you have multiple styles – for example, to account for venues with different lighting or to give different visual feels based on your subject matter – you’ll need to train the AI multiple times, providing the full quantity of images each time. For a busy pro with a large archive that’s not necessarily such a big ask, but if you tend to vary your style or subject matter a lot, it may prove a challenge to provide enough training material.

When trained, a decent first pass at your editing

But with that said, I found that the beta editing algorithm already did quite a good job of mimicking my style, even when ‘only’ given a little over 2,500 images to learn from. For the time being, its edits are limited to straightening, cropping, exposure, white balance, hue/saturation/lightness, presence and details. Support for tone curves and local adjustments is apparently planned for the future, but not available yet.

I trained it with a raft of NASCAR images I’d shot at the Daytona Speedway and hand-edited to have a quite bold, crunchy and colorful feel for visual impact. After training, I had the program apply this same style to some dune buggy shots from the Amargosa Desert near Las Vegas – to good effect.

Given the greatly differing lighting between the two venues – some of the NASCAR images were shot at night under artificial lighting – I did see some slight white balance issues on a small portion of the desert shots, but much of the editing work had already been done for me with very little effort (and surprisingly quickly, to boot).

For this comparison, I used the same profile from my Daytona shoot to edit some dune buggy images to quite good effect, even though the shooting environment was very different indeed.

Applying that same profile to other sports like tennis worked to some degree as well, but with a greater proportion of shots showing issues in exposure. I can thus confirm that it definitely helps to have a good library of similar shots with which to conduct the training.

Given that this feature is still in beta and thus its results liable to change, I’m not going to analyze things too closely for the time being. I will note that I also found the AI’s cropping and straightening tools to work quite well, although – much as with culling – the AI didn’t always seem aware of what the subjects were, and would sometimes crop out portions of the subject. This is easy enough to undo in Lightroom when it happens, though.

Conclusion

I come away from this review just a little conflicted. On the one hand, Aftershoot’s subject rating and blur detection algorithms still stumble too often, assigning ratings and tags that images clearly don’t merit on a far-too-regular basis, especially when dealing with more challenging, sports-based subjects. (As noted, the program does a much better job with straightforward portraits.)

The potential time savings on offer for photographers who are dealing with images by the thousands or tens of thousands can’t be overstated.

But at the same time, the speed with which it can pore over your images even on quite modest hardware is quite impressive. And the potential time savings on offer for photographers who are dealing with images by the thousands or tens of thousands can’t be overstated. When it does a good job, it can save a busy photographer many, many hours of tedious gruntwork, and give a good head start on the editing process as well.

Really, I think what this program most needs is a more robust ability to detect and account for subjects the way that a human being might. Subject recognition technology is already quite widespread in our cameras these days, and the evolution of similar technology in Aftershoot could help it better understand when it’s dealing with an unattractive composition or an unusable shot.

Even when presented with a radically different subject to that used to train its AI, Aftershoot Edits still did a pretty good job, although I did notice some exposure and white balance issues among its automatically-edited versions from this tennis shoot. Had I had more images to work with, I could simply have created another profile for this different style of image, for better results.

Overall, though, I think Aftershoot already shows a lot of promise. And with the speed at which AI technology is developing, I don’t doubt that many of my concerns will be resolved in the not-too-distant future.

As of this writing, with an affordable price tag of just $15/month to get your foot in the door and the editing features available free in beta, it costs very little to test Aftershoot with your own subjects and editing style to see how it performs. If it works well for you, it could free up a whole lot of your time for finding more business, shooting more images and making more money. It’s a pity that the finished editing program will come with a (likely fairly steep) additional fee – though likely still less than the cost of hiring a human assistant for the same task.

Above all, Aftershoot is clearly hinting at the way forward for the industry, and both it and AI in general are bound to get much farther along in a very short time!

What we like
  • Clean, simple interface with plenty of hand-holding for new users
  • Has the potential to save hours of tedious culling and editing
  • No per-image or per-shoot charges
  • Pretty decent performance even on modest hardware
  • Leveling, cropping and duplicate detection work nicely
  • Blink detection doesn’t yield false positives
  • AI editing shows lots of promise

What we don’t
  • Editing features look set to be pretty pricey
  • Needs better subject detection/recognition
  • Needs a lot of already-edited images to train its editing algorithms
  • Has a tendency to rate photos inaccurately
  • Blur detection fails too often
  • Duplicate grouping works only within Aftershoot itself
  • Some tutorial videos are outdated


