Inferred metadata is metadata that can be assumed from other metadata, without recourse to an external information source. It may be used to supply what would otherwise have to be Added metadata. Or, to put it another way, Inferred metadata makes intelligent guesses based on the information already available, whether Source, Added or Derived.
An example would be the concept of Events within iPhoto and iMovie, where files with similar time stamps are assumed to belong to the same event. So, if you have no photos on Tuesday and then 50 photos on Wednesday night, it's reasonable to infer that there was some event on Wednesday night that those photos belong to.
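To make the idea concrete, here is a minimal sketch of that kind of time-stamp clustering. The four-hour gap threshold is my own illustrative assumption, not iPhoto's actual value, and the function name is hypothetical:

```python
from datetime import datetime, timedelta

# Assumed threshold: a gap longer than this starts a new Event.
EVENT_GAP = timedelta(hours=4)

def cluster_into_events(timestamps):
    """Group photo timestamps into events of shots taken close together."""
    events = []
    for ts in sorted(timestamps):
        if events and ts - events[-1][-1] <= EVENT_GAP:
            events[-1].append(ts)   # close enough in time: same event
        else:
            events.append([ts])     # big gap: start a new event
    return events

photos = [datetime(2011, 6, 8, 19, 12), datetime(2011, 6, 8, 19, 40),
          datetime(2011, 6, 8, 21, 5)]
print(cluster_into_events(photos))  # one Wednesday-night event
```

No photos on Tuesday and a tight cluster on Wednesday night falls out of the data with nothing more than the Source time stamps.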
Another example is the way that iMovie and Final Cut Pro X use Content Auto Analysis to search for faces. The size and number of faces are used to infer the type of shot: close-up (CU), medium, wide, etc.
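A sketch of how that inference might work, assuming the face detector reports each face's height as a fraction of the frame height. The thresholds here are illustrative guesses, not Apple's actual values:

```python
def classify_shot(face_heights):
    """face_heights: detected face heights as fractions of frame height (0-1)."""
    if not face_heights:
        return "Wide"        # no visible faces: assume a wide shot
    largest = max(face_heights)
    if largest > 0.5:
        return "CU"          # one face dominates the frame: close-up
    if largest > 0.2:
        return "Medium"
    return "Wide"            # only small faces, e.g. a group at a distance

print(classify_shot([0.6]))        # CU
print(classify_shot([0.1, 0.12]))  # Wide
```

The point is that shot type never has to be logged by hand; it is inferred from metadata the analysis pass already produced.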
We could take the Derived street address from the GPS metadata and use the business or activity information for that location to infer something about the event, which would be useful when building an edit electronically.
For example, if the event is at a church on a Saturday afternoon, then we can infer that it is likely to be a Wedding, and an appropriate editing algorithm could be applied. If there were an event at a residential location beforehand, and an event at a reception venue, restaurant or hotel afterwards, then the inference that this is a Wedding would be stronger.
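Here is one way that strengthening inference could be sketched, scoring a day's events against the wedding pattern just described. The category names, weights and function are all hypothetical assumptions for illustration:

```python
def wedding_score(events):
    """events: list of (venue_category, weekday, hour) tuples in time order."""
    score = 0.0
    for category, weekday, hour in events:
        if category == "church" and weekday == "Saturday" and 12 <= hour <= 18:
            score += 0.5    # Saturday-afternoon church event: strong signal
        if category == "residential":
            score += 0.2    # getting-ready footage beforehand
        if category in ("reception venue", "restaurant", "hotel"):
            score += 0.3    # a reception afterwards strengthens the inference
    return min(score, 1.0)

day = [("residential", "Saturday", 11),
       ("church", "Saturday", 14),
       ("hotel", "Saturday", 19)]
print(wedding_score(day))   # 1.0 -> apply the Wedding editing algorithm
```

Each individual clue is weak on its own; it is the combination of Derived location data and Source time data that makes the guess worth acting on.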
I am assuming that the way a Wedding is edited, i.e. the algorithm used, will differ from the way a documentary is edited, or a news item is edited.
On top of Source, Added and Derived metadata, Inferred metadata becomes a valuable means of determining how the source media should be edited.
Return to Where do we get Content Metadata.