If the PDF contains a text layer, you could try to use a script to access the record’s plainText property and extract the fields from there. That may or may not work, depending on how the plain text is organized – it need not follow the original layout. But that’s easy to check.
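Something along these lines – a minimal JXA sketch, assuming the records are selected in the frontmost window and that the field (here an invented “Case No.”) can be matched with a regular expression; you’d have to adapt the pattern to what the converted text actually looks like:

```javascript
// Minimal sketch: read the plain text of the selected records in DEVONthink 3
// and try to pull one field out of it with a regular expression.
const app = Application("DEVONthink 3");

app.selectedRecords().forEach(record => {
    const text = record.plainText();   // empty if the PDF has no text layer
    if (!text) return;

    // Hypothetical field – adjust the pattern to the real text
    const match = text.match(/Case No\.\s*:?\s*(\S+)/);
    if (match) {
        console.log(record.name() + " -> " + match[1]);
    }
});
```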
But you’ve already burdened the host once; I don’t think doing it a second time is any more problematic than the first. And you could combine this approach with “2”, i.e. access each HTML document, retrieve all the fields and add them as metadata to the corresponding PDF. You’ll probably end up with more data than you need, but that’s better than missing data, isn’t it?
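A rough sketch of that combination – assuming the PDFs still carry the URL of the original page and that DEVONthink 3’s downloadMarkupFrom and addCustomMetaData commands behave as I expect; the field names and the regular expressions are placeholders you’d have to adapt to the site’s HTML:

```javascript
// Sketch: for each selected PDF, fetch the original HTML via the record's URL,
// pull the fields out of the markup and attach them as custom metadata.
const app = Application("DEVONthink 3");

app.selectedRecords().forEach(record => {
    const pageURL = record.url();          // the page the PDF was captured from
    if (!pageURL) return;

    const html = app.downloadMarkupFrom(pageURL);
    if (!html) return;

    // Hypothetical fields: label/value pairs sitting in adjacent table cells
    const fields = {
        author: /<td[^>]*>Author<\/td>\s*<td[^>]*>([^<]+)<\/td>/i,
        caseNo: /<td[^>]*>Case No\.<\/td>\s*<td[^>]*>([^<]+)<\/td>/i
    };

    Object.entries(fields).forEach(([key, pattern]) => {
        const match = html.match(pattern);
        if (match) {
            app.addCustomMetaData(match[1].trim(), { for: key, to: record });
        }
    });
});
```

I’d try that on a handful of records first before letting it loose on the whole batch.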
Why would that be “ideal”? You’d simply duplicate information, and in a not very accessible format, at that.
How is that simpler than retrieving the data from the original HTML on the website and adding them as metadata once?
It actually does. I found that by looking at the scripting dictionary.
Nope. PDF supports only a very limited set of standardized metadata fields: essentially the Info dictionary entries such as Title, Author, Subject and Keywords.