We are proud to announce that Camera Bits, Mobius Labs, Microsoft, Smithsonian, CBC and many others will be presenting at the IPTC Photo Metadata Conference next week, Thursday 10th November. With a theme of Photo Metadata in the Real World, the event is free for anyone to attend. Register here for the Zoom webinar to receive details before the event.
The event will run from 1500 UTC to 1800 UTC. The full agenda with timings is published on the event page.
We will start off with a short presentation on recent updates to the IPTC Photo Metadata Standard from David Riecks and Michael Steidl, co-leads of the IPTC Photo Metadata Working Group. This will include the new properties approved at the recent IPTC Autumn Meeting.
A session on Adoption of IPTC Accessibility properties will feature speakers from the Smithsonian, Camera Bits (makers of the photographers’ tool Photo Mechanic) and Picvario, presenting their progress in implementing IPTC’s accessibility properties, announced at last year’s Photo Metadata Conference.
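For readers new to these properties, here is a minimal sketch of how the two accessibility fields, Alt Text (Accessibility) and Extended Description (Accessibility), sit inside an XMP packet. The property and namespace names follow the IPTC Photo Metadata Standard; in real workflows a tool such as Photo Mechanic or ExifTool writes these for you, so treat this as an illustration only.

```python
def build_xmp(alt_text: str, ext_descr: str) -> str:
    """Return a minimal XMP packet carrying IPTC's two accessibility fields.

    Illustration only: real tools embed this packet inside the image file
    and handle escaping, language alternatives and existing metadata.
    """
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
      xmlns:Iptc4xmpCore="http://iptc.org/std/Iptc4xmpCore/1.0/xmlns/">
   <Iptc4xmpCore:AltTextAccessibility>
    <rdf:Alt><rdf:li xml:lang="x-default">{alt_text}</rdf:li></rdf:Alt>
   </Iptc4xmpCore:AltTextAccessibility>
   <Iptc4xmpCore:ExtDescrAccessibility>
    <rdf:Alt><rdf:li xml:lang="x-default">{ext_descr}</rdf:li></rdf:Alt>
   </Iptc4xmpCore:ExtDescrAccessibility>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""


packet = build_xmp(
    "A red kite in flight over a field.",
    "A red kite glides low over a wheat field at dusk, wings spread, "
    "photographed from below.",
)
print(packet)
```

The short Alt Text is what a screen reader would announce; the Extended Description carries the fuller account for users who request it.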
The next session will be Software Supporting the IPTC Photo Metadata Standard, in which Michael Steidl and David Riecks, co-leads of the IPTC Photo Metadata Working Group, will present their work on IPTC’s database of software supporting the Photo Metadata Standard and on the IPTC Interoperability tool, which shows compatibility between tools for individual properties.
Use of C2PA in real-world workflows is the topic of the next session, demonstrating progress made in implementing C2PA technology to make images and video tamper-evident and to establish a provenance trail for creative works. Speakers include Nigel Earnshaw and Charlie Halford from the BBC, David Beaulieu and Jonathan Dupras from CBC/Radio Canada, Jay Li from Microsoft, and a speaker yet to be confirmed from the Content Authenticity Initiative.
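As background for that session, the shape of a C2PA provenance record can be sketched roughly as follows. This is a heavily simplified, illustrative JSON mock-up: a real C2PA manifest is CBOR-encoded, embedded in the asset in a JUMBF box and cryptographically signed, and the `claim_generator` value here is a made-up placeholder.

```python
import json

# Simplified, illustrative sketch of a C2PA-style manifest (not the real
# binary format). It pairs assertions about the asset's history with a
# signature, which is what makes tampering evident.
manifest = {
    "claim_generator": "example-editor/1.0",   # hypothetical tool name
    "assertions": [
        {
            # "c2pa.actions" is a standard assertion label recording
            # what was done to the asset (captured, edited, resized...)
            "label": "c2pa.actions",
            "data": {"actions": [{"action": "c2pa.edited"}]},
        },
    ],
    # In a real manifest this is a COSE signature over the claim;
    # verifiers recompute hashes and check it to detect tampering.
    "signature": "<COSE signature over the claim goes here>",
}

print(json.dumps(manifest, indent=2))
```

Chaining such manifests across capture, editing and publication is what builds the provenance trail the speakers will be demonstrating.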
The next session should be very exciting: the topic will be Metadata for AI images, with Brendan Quinn and Mark Milstein giving an introduction to synthetic media and “generative AI” images, including the copyright and ownership issues behind the images used to train the machine learning models involved.
Then we have a panel session: How should IPTC support AI and generative models in the future? Questions to be covered include: should we identify which tool, text prompt and/or model was used to create a generative image? Should we include a flag indicating that content was created using a “green”, copyright-cleared set of training images? And perhaps other questions too – please come along to ask your own! Speakers include Dmitry Shironosov of Everypixel / Dowel.ai / Synthetics.media, Martin Roberts from Mobius Labs and Sylvie Fodor from CEPIC. The session will be moderated by Mark Milstein from vAIsual.
Last year we had over 200 registrants and very lively discussions. We look forward to even more exciting presentations and discussions this time around! See you there. (Please don’t forget to register!)