Obviously, the easiest way to gather metadata is automatically, without requiring any human intervention. That’s Implicit metadata. If you have to specifically act to add metadata, that’s Explicit metadata.
Implicit metadata is derived when you do something that doesn’t seem like it’s generating metadata, such as:
- Watching a video on YouTube (view count metadata);
- Buying a product on Amazon (sales and popularity metadata);
- Skipping past a song on a music player because you don’t want to hear it (preference metadata); or
- Using a Clip in a Sequence (Clip usage metadata).
Implicit metadata does not require additional work!
Similarly, Source metadata from the camera – usually technical in nature – requires no additional work to use. These days that information is available directly in the NLE interface as metadata.
Explicit metadata is derived from an action by the user that creates an immediately identifiable piece of metadata. If you do things like:
- Rate a video on YouTube (you generate rating metadata);
- Rate a song in your music player library (you generate a metadata rating in your library);
- Add a vote for a site on Digg (Vote count metadata); or
- Enter log notes for a clip (Content or Logging metadata),
then you’re generating Explicit metadata.
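The distinction can be sketched in code (the class and method names here are hypothetical, not any NLE's actual API): implicit metadata accrues as a side effect of normal actions, while explicit metadata exists only because the user deliberately entered it.

```python
class Clip:
    """A media clip carrying both kinds of metadata."""

    def __init__(self, name):
        self.name = name
        self.view_count = 0    # implicit: updated as a by-product of watching
        self.usage_count = 0   # implicit: updated when the clip is cut into a sequence
        self.rating = None     # explicit: only set by a deliberate user action
        self.log_notes = ""    # explicit: typed in by a logger

    # Implicit metadata: the user just watches; the count is a side effect.
    def play(self):
        self.view_count += 1

    # Implicit metadata: using the clip in a sequence records usage.
    def add_to_sequence(self):
        self.usage_count += 1

    # Explicit metadata: the user must act specifically to create it.
    def rate(self, stars):
        self.rating = stars

    def log(self, notes):
        self.log_notes = notes


clip = Clip("Interview take 3")
clip.play()                 # implicit: view count incremented
clip.add_to_sequence()      # implicit: usage count incremented
clip.rate(4)                # explicit: rating deliberately set
clip.log("Good answer on question two")
```

Note that `play()` and `add_to_sequence()` generate metadata without the user ever thinking about metadata, while `rate()` and `log()` are performed for no reason other than to create it.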
Even so, implicit metadata “indeed kicks explicit’s *ss.”
Explicit metadata takes work. It requires observation and analysis: tasks computers are good at, but which bore humans who care about emotion and story.