CEP, Extendscript, and XMP API – notes so far

Lately I’ve been working on a CEP panel for reading and writing metadata to and from images in Adobe Bridge. Many years ago I created a File Info panel using the File Info SDK with MXML and ActionScript 3, but Adobe dropped compatibility with File Info panels created that way quite a while back.

Although Adobe do still offer a File Info SDK, the current recommended way to do most things in this vein seems to be CEP. So I thought I’d better try creating my panel using CEP, on the theory that the File Info panel may be retired soon, while CEP will hopefully have a longer life.

I haven’t found it very easy, so I thought I would share some of the stuff I’ve had to work out so far. No doubt I will be posting at least one more of these as I discover more issues. The points below relate to CEP, the ExtendScript portion of CEP, and the XMP API for ExtendScript.


Debugging CEP panel ExtendScript

Debugging the ExtendScript side of a CEP panel directly is not possible. Instead you have to download and install the ExtendScript Toolkit. Start the Toolkit, open the ExtendScript file you want to debug, open the application you want to debug against (in my case Bridge), then manually run the function(s) you need to debug from the Toolkit.

The ExtendScript Toolkit is pretty poor for object inspection. Some objects are listed in the Data Browser pane, but many are not, or don’t list what properties and methods they have. Similarly, entering a variable you want to inspect in the console will just print the toString() of that object, not show the actual object with its properties and methods like you’d get in a web browser console.

This is particularly annoying due to the lack of documentation on the various classes. You can’t even inspect instances of some of the classes yourself to see what properties / methods are available.

Dealing with more complex properties like bags of structs

When trying to find the path to a more complex nested property, it can be useful to use an iterator, pausing on each iteration to check the current node and see exactly what the path to it is, e.g.

var obj = xmp.iterator(null, XMPConst.NS_IPTC_EXT, 'LocationCreated');
var prop;
while (prop = obj.next()) {
    debugger; // in the console, check prop.path
}

Once you know the path to the property you want, then you can get the value of a struct property like:

xmp.getStructField(XMPConst.NS_IPTC_EXT, 'LocationCreated[1]', XMPConst.NS_IPTC_EXT, 'City');

Or more directly like:

xmp.getProperty(XMPConst.NS_IPTC_EXT, 'Iptc4xmpExt:LocationCreated[1]/Iptc4xmpExt:City');

Though to avoid problems where the namespace is registered under a different prefix, it would be more robust to look the prefix up:

var prefix = xmp.getNamespacePrefix(XMPConst.NS_IPTC_EXT);
xmp.getProperty(XMPConst.NS_IPTC_EXT, prefix+':LocationCreated[1]/'+prefix+':City');

Note that the 1 in LocationCreated[1] refers to the first item in the bag – the index starts from 1, not 0. Also note that XMPConst.NS_IPTC_EXT is not an ‘official’ constant; I just added it myself: XMPConst.NS_IPTC_EXT="http://iptc.org/std/Iptc4xmpExt/2008-02-29/";
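Since the prefix lookup and the 1-based indexing are both easy to get wrong, it can help to wrap the path construction in a small helper. This is just a sketch of plain string handling – buildStructFieldPath is my own name, not part of the XMP API, and it assumes you pass in the prefix returned by getNamespacePrefix() without a trailing colon (as in the example above):

```javascript
// Hypothetical helper (not part of the XMP API): builds a path like
// "Iptc4xmpExt:LocationCreated[1]/Iptc4xmpExt:City" for use with getProperty().
// Remember that XMP array indices start at 1, not 0.
function buildStructFieldPath(prefix, arrayName, index, fieldName) {
    if (index < 1) {
        throw new Error('XMP array indices start at 1');
    }
    return prefix + ':' + arrayName + '[' + index + ']/' + prefix + ':' + fieldName;
}

// e.g. buildStructFieldPath('Iptc4xmpExt', 'LocationCreated', 1, 'City')
// → "Iptc4xmpExt:LocationCreated[1]/Iptc4xmpExt:City"
```

You could then call xmp.getProperty(XMPConst.NS_IPTC_EXT, buildStructFieldPath(prefix, 'LocationCreated', 1, 'City')).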

Writing to RAWs / XMP sidecar files

Opening a RAW file (a CR2 in my test) for updating the XMP metadata fails, e.g.

var xmpFile = new XMPFile(file.fsName, XMPConst.UNKNOWN, XMPConst.OPEN_FOR_UPDATE);

Will throw the error:

Error: XMP Exception: OpenFile returned false

If you instead try to open the XMP sidecar, the XMPFile instance is created okay, but doesn’t actually pull in the XMP – calling xmpFile.getXMP().serialize() will return an XMP packet containing no properties. Calling xmpFile.canPutXMP(xmp) will return false. If you try to call xmpFile.putXMP(xmp) anyway, you’ll get:

Error: XMP Exception: XMPFiles::PutXMP - Can't inject XMP

When writing straight to the XMP sidecar file using the File class (rather than XMPFile), you must explicitly set the file encoding, e.g.
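A minimal sketch of writing a serialized packet to the sidecar, assuming sidecarPath is a variable holding the path to the .xmp file (this is ExtendScript, so it only runs inside an Adobe host, not standalone):

```javascript
// ExtendScript only – File is an Adobe host class, not available standalone.
// sidecarPath is a hypothetical variable holding the path to the .xmp sidecar.
var sidecarFile = new File(sidecarPath);
sidecarFile.encoding = 'UTF8'; // serialized XMP is UTF-8
if (sidecarFile.open('w')) {
    sidecarFile.write(xmp.serialize());
    sidecarFile.close();
}
```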


Otherwise ExtendScript doesn’t detect the file encoding correctly and falls back to your system default encoding, which means the write fails when you try to write the serialized XMP (which is UTF-8) into the file. At least, this is the case on my Windows PC. If your system encoding is UTF-8, or your XMP sidecars contain a UTF-8 BOM, then you probably wouldn’t need to set the file encoding, since it would already be correct.

Getting Bridge to refresh the metadata

I noticed that after saving the metadata, Bridge would not update the displayed metadata in the Metadata panel unless I put a debugger or alert statement at the end of the function. The old metadata would persist even after selecting another image and then switching back to the modified image. To fix this you can force a refresh:
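A sketch of what that can look like, assuming the Bridge DOM’s app.document.refresh() method (treat the exact call as an assumption and check the Bridge scripting reference – this only runs inside Bridge):

```javascript
// ExtendScript, Bridge only – app.document is the Bridge DOM's active window.
// Assumption: Document.refresh() forces the metadata display to update.
app.document.refresh();
```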


Grabbing an Alt-Lang array

Say you want to get all the values of dc:title, which is an alt-lang type. You can’t use getLocalizedText(), as that’s for getting a single value where you already know which locale you want; I want each locale with its value.

My first attempt was using an iterator. I thought this would just iterate over the entries in the array, and I could use the locale property to determine the language:

var obj = xmp.iterator(XMPConst.ITERATOR_JUST_LEAFNODES, XMPConst.NS_DC, 'title');
var val;
// Order does matter – the first item should always be the default, according to the RDF spec
while (val = obj.next()) {
    oProperty.value[val.locale] = val.value;
}

However, I hadn’t read the docs carefully enough: locale is apparently only set by calls to getLocalizedText(). Weirdly, it was set on the first iteration, but to the namespace?! On the second iteration it then pulled the lang attribute. So the iterator must step through node values and node attributes as if each were a separate node. You could put something together that uses this method, first pulling the values, then the lang values, and finally consolidating them together, but it seems a bit messy.
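For illustration, the consolidation step of that two-pass approach could look like this plain-JavaScript sketch (consolidateAltLang is a hypothetical helper; it just zips a list of values with a list of lang qualifiers collected from the iterator output):

```javascript
// Hypothetical helper: pair up values and xml:lang qualifiers collected
// in separate passes over the iterator output.
function consolidateAltLang(values, langs) {
    var result = {};
    for (var i = 0; i < values.length; i++) {
        result[langs[i]] = values[i];
    }
    return result;
}

// e.g. consolidateAltLang(['Default title', 'UK title'], ['x-default', 'en-GB'])
// → { 'x-default': 'Default title', 'en-GB': 'UK title' }
```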

Next try was using getArrayItem:

for (var i = 1, oArrayItem; oArrayItem = xmp.getArrayItem(XMPConst.NS_DC, 'title', i); i++) {
    oProperty.value[oArrayItem.locale] = oArrayItem.value;
}

Now maybe I’m doing something wrong there, but it seemed to keep going through values (got up to 8 before I cancelled it), even though I only had one entry in the array.

My final solution was just using the basic getProperty() for the value and getQualifier() for the lang:

for (var i = 1, oArrayItem; oArrayItem = xmp.getProperty(XMPConst.NS_DC, 'title[' + i + ']'); i++) {
    var lang = xmp.getQualifier(XMPConst.NS_DC, 'title[' + i + ']', XMPConst.NS_XML, 'lang');
    oProperty.value[lang] = oArrayItem.value;
}

Property creation options

These seem to be documented only in the information on certain methods, not under the documentation for the XMPConst class itself. As I mentioned earlier, the Toolkit is quite poor for inspection, so unfortunately you can’t inspect the XMPConst object to see exactly what properties it contains – there could be more than these.

XMPConst.PROP_IS_ARRAY – the item is an array (of type alt, bag, or seq). On its own, creates a bag (unordered array).
XMPConst.ARRAY_IS_ORDERED – item order is significant. Implies XMPConst.PROP_IS_ARRAY. Creates a seq (ordered array).
XMPConst.ARRAY_IS_ALTERNATIVE – items are mutually exclusive alternates. Implies XMPConst.PROP_IS_ARRAY. Creates an alt (alternative array).
XMPConst.PROP_IS_STRUCT – the item is a structure with nested fields. Creates a struct (a node with rdf:parseType="Resource" set).

Setting an Alt-Lang entry

It seems xmp.setLocalizedText() is not actually suitable for this. If I call:

xmp.setLocalizedText(XMPConst.NS_DC, 'title', null, 'x-default', 'X default text value');

We end up with one x-default entry in the alt-lang array, as we would expect. However, when we try to add more than one value:

xmp.setLocalizedText(XMPConst.NS_DC, 'title', null, 'x-default', 'X default text value');
xmp.setLocalizedText(XMPConst.NS_DC, 'title', 'en-GB', 'en-GB', 'UK text value');

We end up with x-default and en-GB entries in the array, but both with the value 'UK text value'. If we instead swap the order, to set the x-default value last, the same thing happens:

xmp.setLocalizedText(XMPConst.NS_DC, 'title', 'en-GB', 'en-GB', 'UK text value');
xmp.setLocalizedText(XMPConst.NS_DC, 'title', null, 'x-default', 'X default text value');

This results in an array with x-default and en-GB entries, but both with the value 'X default text value'.

If we do:

xmp.setLocalizedText(XMPConst.NS_DC, 'title', null, 'x-default', 'X default text value');
xmp.setLocalizedText(XMPConst.NS_DC, 'title', 'en-GB', 'en-GB', 'English GB title');
xmp.setLocalizedText(XMPConst.NS_DC, 'title', 'en-US', 'en-US', 'English US title');

We end up with:

dc:title  (0x1E00 : isLangAlt isAlt isOrdered isArray)
 [1] = "English GB title"  (0x50 : hasLang hasQual)
       ? xml:lang = "x-default"  (0x20 : isQual)
 [2] = "English GB title"  (0x50 : hasLang hasQual)
       ? xml:lang = "en-GB"  (0x20 : isQual)
 [3] = "English US title"  (0x50 : hasLang hasQual)
       ? xml:lang = "en-US"  (0x20 : isQual)
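As an aside, the hex values in that dump are bit flags from the underlying XMP Toolkit. A plain-JavaScript sketch of decoding the array-related ones (the numeric values are taken from the C++ toolkit’s XMP_Const.h, so treat their applicability to the ExtendScript wrapper as an assumption):

```javascript
// Bit-flag values from the XMP Toolkit's XMP_Const.h (assumed to match the
// options values reported in dumps like the one above).
var PROP_FLAGS = {
    isArray:   0x0200, // kXMP_PropValueIsArray
    isOrdered: 0x0400, // kXMP_PropArrayIsOrdered
    isAlt:     0x0800, // kXMP_PropArrayIsAlternate
    isLangAlt: 0x1000  // kXMP_PropArrayIsAltText
};

// Hypothetical helper: list which of the above flags are set in an options value.
function decodePropOptions(options) {
    var names = [];
    for (var name in PROP_FLAGS) {
        if (options & PROP_FLAGS[name]) {
            names.push(name);
        }
    }
    return names;
}

// decodePropOptions(0x1E00) → isArray, isOrdered, isAlt and isLangAlt (in some
// order), matching the "isLangAlt isAlt isOrdered isArray" shown for dc:title.
```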

So it seems the first ‘real’ RFC 3066 language you set is used as the x-default value, i.e. you can’t have an alt-lang array where the x-default value is unique, other than when the x-default value is the only value. However, this is not how the Metadata panel in Bridge or the XMP File Info panel works. In both of these, only the x-default value is shown, and changing it will only save the change to the x-default value – all other language values are left unchanged. I don’t know whether this is the correct behaviour, but I’m inclined to copy the Bridge / File Info panel behaviour and allow unique x-default values.

So to do this you need to set the array entries like a standard array, and then set the xml:lang attribute on each entry:

// Create the property as an alt array
xmp.setProperty(XMPConst.NS_DC, 'title', null, XMPConst.ARRAY_IS_ALTERNATIVE);
// Create the array entry for the default
xmp.appendArrayItem(XMPConst.NS_DC, 'title', 'X default text value');
// Add the xml:lang qualifier for the default
xmp.setQualifier(XMPConst.NS_DC, 'title[1]', XMPConst.NS_XML, 'lang', 'x-default');
// Create the array entry for en-GB
xmp.appendArrayItem(XMPConst.NS_DC, 'title', 'UK text value');
// Add the xml:lang qualifier for en-GB
xmp.setQualifier(XMPConst.NS_DC, 'title[2]', XMPConst.NS_XML, 'lang', 'en-GB');

Including multiple ExtendScript .jsx files

The CEP ‘Cookbook’ does have a section on this, however it doesn’t make any sense to me. This is what it says:

// After finishing loading the jsx file refered in the manifest.xml, please use evalScript of CSInterface to load other jsx files.
// "anotherJSXFile" is not the first loaded jsx file, so the value of "$.fileName" in it's stage is correct.
CSInterface.evalScript('$.evalFile(anotherJSXFile)', callback);
// Or in the first loaded jsx file, load another jsx file, and the value of "$.fileName" is correct in this file.
// Given the code is running this example.jsx which is referred in the manifest.xml. 
// In the stage of "hardCodeJSXFile", the value of "$.fileName" is correct too.

In that first example, how are you meant to get the path of ‘anotherJSXFile’ so you can load it to find out what the extension path is? Likewise in the second example, how is your JSX going to get the path to ‘hardCodeJSXFile’ so it can be loaded?

You can’t use absolute paths, unless you are the only person who will ever use the extension / panel you’re working on.

If you use an //@include or #include line in your main JSX, the path to whatever file you want to load won’t be correct, and your JSX won’t work. If you use $.fileName from your JSX (or eval it as JSX from your JS), it will be an integer rather than a path.

You can’t use a relative path either, as the CWD is not the extension dir – if you create a new File('./') and check its fsName, you’ll find the current path is the Bridge program directory. And if you include multiple ScriptPath nodes in your manifest.xml, only one will be loaded.

Thankfully, it isn’t actually impossible to get the extension path:

var csIface = new CSInterface();
csIface.evalScript('$.evalFile(\''+csIface.getSystemPath( SystemPath.EXTENSION )+'/host/test2.jsx\')');

From your JS, this will load the file test2.jsx from the host subdir within your extension folder.
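One gotcha with building that string: if the extension path contains backslashes or single quotes, they will be mangled when the string is evaluated as ExtendScript. A small helper to escape them – buildEvalFileCall is my own name, not part of CEP, and it assumes the path ends up inside a single-quoted ExtendScript string literal:

```javascript
// Hypothetical helper (not part of CEP): builds the evalScript string for
// loading a .jsx file, escaping backslashes and single quotes so the path
// survives being embedded in a single-quoted ExtendScript string literal.
function buildEvalFileCall(extensionRoot, relPath) {
    var full = extensionRoot + '/' + relPath;
    var escaped = full.replace(/\\/g, '\\\\').replace(/'/g, "\\'");
    return "$.evalFile('" + escaped + "')";
}
```

So the example above could become csIface.evalScript(buildEvalFileCall(csIface.getSystemPath(SystemPath.EXTENSION), 'host/test2.jsx'));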

Posted on by xoogu, last updated

WordPress stuck in infinite redirect loop

I was trying to set up a local (development) copy of a site I manage today, but found that I was getting a ‘Too many redirects’ error when trying to load it. Eventually I tracked it down to the WordPress redirect_canonical() function, and more specifically is_ssl().

is_ssl() was reporting false even though I was requesting the site over https. And so it was redirecting to the https URL (as that is what I have set as the siteurl in the WP options), causing an infinite redirect loop.

The cause of this problem, and the solution, can be found in the WordPress function reference for is_ssl(). The problem was that I was using a reverse proxy setup, so the Apache instance running WordPress wasn’t using https; only the nginx server handling the initial requests was.

The problem is solved by adding the following to the nginx config:

proxy_set_header X-Forwarded-Proto https;

and the following to wp-config.php:

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $_SERVER['HTTPS'] = 'on';
}

I’d be interested to know how this is normally handled in environments using reverse proxies, as I would think many shared webhosts use this structure, but users aren’t required to add checks for the X-Forwarded-Proto header in their wp-config just to get WordPress working on https. Or are they?


Building a PHP extension rather than having it compiled in

Today I wanted to add the PHP EXIF extension to my local PHP installation. According to the PHP manual, to do this you should configure PHP with --enable-exif. However, I didn’t want to go through the tedious process of compiling PHP from scratch again; I just wanted to add the single extension.

I couldn’t find any info on how to do this, but thankfully it is actually quite simple – it’s pretty much the same as compiling any extension that doesn’t come with PHP as standard. The same process should work with any of the extensions that ship with PHP.

Continue reading


Stupid mistake it took me ages to fix #8761

I was having trouble with supervisord being unable to start nginx on a new dev VM I had set up. In the supervisor stderr log for nginx, and in nginx’s error.log, I was getting:

nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /opt/nginx-1.9.12/conf/nginx.conf
nginx: [emerg] bind() to failed (13: Permission denied)

Continue reading


DNS changes not propagating

I had an issue lately where a new subdomain I’d added for a site wasn’t accessible. Trying to debug it, when I ran nslookup sub.example.com my.webhost.dns it returned the correct IP address of the server the subdomain was meant to be pointing at. But when I ran the same lookup against Google’s public DNS server, it couldn’t find the domain.

Eventually I tracked down the problem, and it was something very simple. The domain wasn’t actually set to use my webhost’s DNS servers. Instead I had it configured to use CloudFlare’s DNS servers.

So if you have this problem, make sure you double-check that the DNS server(s) you’re updating are actually set as the authoritative DNS servers for the domain. It might seem obvious, but it’s easy to overlook (at least it was for me!).


Nginx not gzipping files

I had a problem recently where Nginx wasn’t gzipping responses, despite having the necessary lines in my nginx.conf. In looking for a solution, I found quite a few posts covering various reasons why gzip might not be working. But none that fitted my case.

So, I thought I might as well share what the problem / solution was in my case, plus the other reasons why Nginx may not be gzipping files. Hopefully this will be helpful to anyone else trying to figure out why their Nginx gzip configuration isn’t working.

Mime type not in gzip_types

As part of the gzip configuration, you need to specify what mime types should be gzipped.

gzip_types text/css text/javascript text/xml text/plain;

If you had something like the above, but your JavaScript was actually served with a mime type of application/javascript, then it wouldn’t be gzipped, because application/javascript is not listed in the mime types you want gzipped.

So the solution here is just to ensure you include all mime types you want gzipped after the gzip_types directive.

Item size too small

Normally, as part of the gzip configuration you will include a minimum size that a response must be for it to get gzipped (there’s little benefit in gzipping already very small files).

gzip_min_length 1100;

It can be easy to forget this and think that gzip isn’t working, when actually it is working, it’s just that you’re checking with a small file that shouldn’t be gzipped.

Using old HTTP version

This was what the problem was in my case. By default, Nginx will only gzip responses where the HTTP version being used is 1.1 or greater. This will be the case for nearly all browsers, but the problem comes when you have a proxy in front of your Nginx instance.

In my case, my webhost uses Nginx, which then proxies requests to my Nginx instance, and I’ve mirrored this setup in my development environment. The problem is that by default Nginx will proxy requests using HTTP/1.0.

So the browser was sending the request using HTTP/1.1; the frontend Nginx was receiving it, then proxying it to my backend Nginx using HTTP/1.0. My backend Nginx saw that the HTTP version didn’t meet the default gzip minimum of 1.1, and so sent back the response uncompressed.

To fix this, you either need to set the proxy_http_version directive on the proxying server to 1.1, or set gzip_http_version to 1.0 in the backend’s config.
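As a sketch, either of the following directives would do it (adapt to your own config – the first goes in the frontend proxy’s configuration, the second in the backend’s gzip configuration; you only need one or the other):

```nginx
# Option 1 – on the frontend (proxying) server: proxy using HTTP/1.1
proxy_http_version 1.1;

# Option 2 – on the backend server: allow gzip for HTTP/1.0 requests
gzip_http_version 1.0;
```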

Client side software deflating

I think this is likely to be a rather unusual situation, but I found it described here: nginx gzip enabled but not gzipping. Basically, they had some security software installed on the client machine they were testing from, and this software was deflating and inspecting all responses before they were passed on to the browser.

The same thing could happen if there were a proxy between you and the server that deflates any gzipped responses before sending them on to you, but I think it would be very rare to have a proxy configured like that.

There could also be other reasons why Nginx might not be gzipping responses. For example, you may have a gzip_disable directive that matches, or a gzip off; somewhere later in your config. But I think the items above are likely to be the main reasons why Nginx isn’t (or looks like it isn’t) gzipping files when it should be.


Animation event not firing in MS Edge? This might be why

Recently I’ve been working on a widget that makes use of this hack using animation events as an alternative to DOM Mutation events. The nice thing about this method is that it lets you add the event listener on the element you want to get the ‘node inserted’ event for. Whereas with DOM mutation events, you must add the listener to the parent node. In cases where you don’t know where the node will be inserted, this means attaching the mutation listener to the body, and you have to filter all mutation events to try and find the one for your element. With the animation event method you don’t have that problem.

Anyway, to get on to the main point of this post, I was having a big problem with my widget working fine in all browsers (that support CSS3 animations) apart from MS Edge. It seemed very strange that something working in older IEs would not work in Edge. The problem was that the animation event was never being fired when the node was inserted. But when I tried the jsFiddle example from the backalleycoder post, that worked fine in Edge.

After much debugging, I found the issue. I had my keyframes like this:

@keyframes nodeInserted {
    from {
        outline-color: #000;
    }
    to {
        outline-color: #111;
    }
}

@-moz-keyframes nodeInserted {
    from {
        outline-color: initial;
    }
    to {
        outline-color: initial;
    }
}

@-webkit-keyframes nodeInserted {
    from {
        outline-color: initial;
    }
    to {
        outline-color: initial;
    }
}

@-ms-keyframes nodeInserted {
    from {
        outline-color: #000;
    }
    to {
        outline-color: #111;
    }
}

@-o-keyframes nodeInserted {
    from {
        outline-color: #fff;
    }
    to {
        outline-color: #000;
    }
}

Initially I had the unprefixed @keyframes empty, but when playing with the jsFiddle example I found MS Edge didn’t like an empty @keyframes, nor did it like a @keyframes changing the values from initial to initial. The problem with my CSS was that after defining the unprefixed @keyframes in a format Edge will fire an animation event for, I then had a webkit-prefixed @keyframes using the initial values it doesn’t like.

MS Edge was picking up the webkit-prefixed @keyframes and using that, since it comes later in the stylesheet than the unprefixed version. So the solution was simply to move the unprefixed @keyframes down to the bottom.

It seems a bit silly that MS Edge will pick up the webkit prefixed declaration, but doesn’t pick up the later ms prefixed declaration. But I guess that’s the kind of weirdness you come to expect from MS.

This foxed me for quite a while, so I hope this helps anyone else coming across the same problem.


Script to test / benchmark SQL queries

I’m not particularly knowledgeable on the subject of optimising SQL queries, so the easiest way for me to optimise a query is to write a few variations and then test them against one another. To this end I’ve developed a PHP class to do the testing and benchmarking. I think that even if I was highly knowledgeable about optimising queries, I would still want to test my queries to ensure that my theory held true in practice.

For a useful benchmark you need to execute the queries using a range of data that simulates the real data the queries would be executed with. They also need to be executed in a random order and multiple times, so results can be averaged and are reasonably reliable. That’s what this class does, along with providing a summary of the results in CSV format.

It should be noted that this class does not set up or modify any tables for testing with – it just allows you to supply a range of data to be included within the queries themselves, such as testing with a range of different values in a WHERE clause.

Continue reading


Issues compiling PHP dependencies

I decided to update PHP, and had a few problems compiling its dependencies. So I thought I’d share the problems I had and the solutions here for future reference, and maybe they might help someone else as well.

Continue reading


Does HTTP2 really simplify things?

I recently watched a webinar from NGINX on ‘What’s new in HTTP/2?’. In the webinar they go over how HTTP/2 differs from version 1, and what benefits it has. One benefit is that it allows you to use SSL with no performance hit compared to plain HTTP/1.1. The other benefit they go into is that it simplifies your work process. However, I’m not sure there’s much truth in this simplification benefit.

Continue reading
