Where do we get Content Metadata?

We get Content Metadata three ways:

  • It can be added by a person somewhere in the production process, even if that requires analysis;
  • It can be derived by computer algorithm; or
  • It can be inferred by computer algorithm.

Right now, almost all Content Metadata is added either in production using tools like Lumberjack System and Prelude LiveLogger, or in the edit bay before work can really get started.

Having Content Metadata on all your footage makes it easy to find the shot(s) you need, or the topic you are trying to cover, wherever it is in your Project or Library. Without well logged (organized) footage you will waste hours trying to find any given take, if you can find it at all. Only the simplest project can be completed without investing in Content Metadata. The trouble is that logging is still largely a manual process: adding Content Metadata at the shoot is a huge benefit, but it still requires manual labor by someone.

Derived Metadata

In the future, apps will have a much, much bigger role in the process. We can already derive some Content Metadata. For example, from GPS information in a file we can derive Location. GPS is now standard in many still cameras but very few video cameras; the one huge class of camera that does add GPS information to video files is the smartphone.
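
As a minimal sketch of how that derivation might work, the Python below reads a clip's metadata with ffprobe (part of FFmpeg) and converts the GPS tag into a latitude/longitude Location. The file name is made up for illustration, and it assumes the clip carries an ISO 6709 location tag, as smartphone footage typically does.

    # Sketch: derive a Location from the GPS metadata a smartphone writes
    # into its movie files. Assumes ffprobe (from FFmpeg) is installed and
    # the clip carries an ISO 6709 location tag.
    import json
    import re
    import subprocess

    def derive_location(path):
        """Return (latitude, longitude) derived from a clip's GPS tag, or None."""
        probe = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
            capture_output=True, text=True, check=True,
        )
        tags = json.loads(probe.stdout).get("format", {}).get("tags", {})
        iso6709 = tags.get("com.apple.quicktime.location.ISO6709") or tags.get("location")
        if not iso6709:
            return None
        # ISO 6709 looks like "+37.3316-122.0301+012.085/"
        match = re.match(r"([+-]\d+\.\d+)([+-]\d+\.\d+)", iso6709)
        return (float(match.group(1)), float(match.group(2))) if match else None

    print(derive_location("interview_01.mov"))  # e.g. (37.3316, -122.0301)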

From speech we can derive text. From text we can derive keywords, as Lumberjack System already does. From keywords we can derive Keyword Ranges.
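
Here is a minimal sketch of those last two steps, explicitly not Lumberjack System's actual method: it derives keywords from a timed transcript by filtering out common words, then turns each keyword's occurrences into Keyword Ranges (in and out points per keyword). The transcript segments and stop-word list are illustrative assumptions.

    # Sketch: derive keywords from a timed transcript, then derive Keyword
    # Ranges. Transcript format and stop words are assumptions.
    from collections import defaultdict

    STOP_WORDS = {"the", "a", "an", "and", "we", "to", "of", "in", "it", "is", "on"}

    # (start_seconds, end_seconds, text), as a speech-to-text step might emit
    transcript = [
        (0.0, 4.2, "We walked the length of the old harbour wall"),
        (4.2, 9.0, "The harbour has been a working port since the 1800s"),
        (9.0, 13.5, "Fishing boats still leave the port every morning"),
    ]

    def keyword_ranges(segments):
        """Map each candidate keyword to the time spans where it is spoken."""
        ranges = defaultdict(list)
        for start, end, text in segments:
            for word in text.lower().split():
                word = word.strip(".,!?")
                if word and word not in STOP_WORDS:
                    ranges[word].append((start, end))
        return ranges

    for keyword, spans in sorted(keyword_ranges(transcript).items()):
        if len(spans) > 1:  # keep only words that recur: a crude keyword test
            print(keyword, spans)
    # harbour [(0.0, 4.2), (4.2, 9.0)]
    # port [(4.2, 9.0), (9.0, 13.5)]

A real implementation would use a smarter keyword test than simple recurrence, but the shape of the pipeline is the same: text in, keywords out, ranges attached.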

Read more on Derived Metadata…

Inferred Metadata

By combining data, we can infer a lot of things, as we see in the Metadata and the Wider World article, where a full conspiracy, and its conspirators, is uncovered using nothing but publicly available metadata.
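
To give a purely hypothetical flavor of what this could look like in production: the sketch below combines two pieces of derived metadata (creation time and GPS-derived Location) to infer a third fact, that two clips cover the same event. The clip records and thresholds are invented for illustration.

    # Hypothetical sketch of Inferred Metadata: combine creation time and
    # location to infer that clips belong to the same event.
    from math import hypot

    clips = [
        {"name": "A001", "time": 1000.0, "lat": 37.3316, "lon": -122.0301},
        {"name": "A002", "time": 1300.0, "lat": 37.3318, "lon": -122.0299},
        {"name": "B001", "time": 90000.0, "lat": 51.5074, "lon": -0.1278},
    ]

    def same_event(a, b, max_gap_s=3600, max_dist_deg=0.01):
        """Infer a shared event if two clips were shot near in time and space."""
        close_in_time = abs(a["time"] - b["time"]) <= max_gap_s
        close_in_space = hypot(a["lat"] - b["lat"], a["lon"] - b["lon"]) <= max_dist_deg
        return close_in_time and close_in_space

    for i, a in enumerate(clips):
        for b in clips[i + 1:]:
            if same_event(a, b):
                print(f"Inferred: {a['name']} and {b['name']} cover the same event")
    # Inferred: A001 and A002 cover the same event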

As yet, there are no real-world applications of Inferred Metadata within the production world, but they will come.

Read more about Inferred Metadata…