beware JSON.stringify

TL;DR: Don’t trust JSON.stringify to serialize objects returned by navigator.geolocation.getCurrentPosition(), or any other complex object that relies on prototypal inheritance.  You will get different results on different browsers.

In modern JavaScript code, JSON.stringify is really useful – it converts an object into a string, which can in turn be converted back into an object.  This is very often used when sending object data across the network to your server.  It is simple and it works well (almost all the time :) )

JavaScript objects are passed by reference, which can cause some issues if you are not careful.  One common case where copies of objects are useful is when initializing a new object with object properties of another object.  If you don’t use a COPY of the object assigned to the property, any changes made to that object’s properties will be visible through BOTH references, as both point to the same instance of the object.  This is easily avoided by simply copying the object like so:

obj.property = JSON.parse(JSON.stringify(sourceObject));
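To see why the copy matters, here’s a quick illustration (hypothetical names):

var defaults = { retries: 3 };
var a = { settings: defaults };    // both of these properties point at
var b = { settings: defaults };    // the very same object instance
a.settings.retries = 5;            // ...so b.settings.retries is now 5 too!

var c = { settings: JSON.parse(JSON.stringify(defaults)) };
c.settings.retries = 1;            // the copy leaves defaults (and b) untouched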

In certain environments, you may need a custom method for creating JSON representations of your objects, because framework code adds ‘temporary’ properties to your object for its own purposes that you probably don’t want copied to your new object.  Angular provides an angular.toJson() method for this purpose (it strips Angular’s $$-prefixed properties); under the covers it still uses JSON.stringify.

All of this is fine and relatively well-known.  JSON.stringify is a reliable cross-browser tool that is used a lot – it ‘just works’.  Until it doesn’t :)

I spent the last week or more struggling with a really subtle bug in our HTML5 app that was only happening on iOS.  Certain properties of objects were being corrupted / removed, yet when debugging, the objects all seemed fine.  It turns out that JSON.stringify only serializes an object’s ‘own’ properties – i.e. the ‘top level’ properties of that object.  If you have a complex JavaScript object using prototypal inheritance, it will NOT serialize properties that come from the prototype chain.  At least this is the case in SOME browsers (regardless of the specifications).  Basically, once you’re dealing with objects of a specific type (created with function constructors), you can’t trust that JSON.stringify will do what you expect.  The JSON specification does allow custom objects like this to provide their own .toJSON() method, which JSON.stringify will use.  However, not all library vendors (or browser vendors) bother to provide this feature.
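Here’s a minimal illustration of the ‘own’ properties behaviour, using a hypothetical constructor:

// properties inherited from the prototype chain are not serialized
function Point() {}
Point.prototype.x = 1;    // inherited via the prototype chain

var p = new Point();
p.y = 2;                  // an 'own' property

console.log(JSON.stringify(p));   // '{"y":2}' – the inherited x is gone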

The specific case I was dealing with was the location/position object provided by the navigator.geolocation.getCurrentPosition() method.  The success callback receives a position object whose coords property carries the values we care about (e.g. latitude, longitude).  This information is something we wanted to persist to our database, as well as to localStorage on the user’s device (last known position).  JSON.stringify(position) would return “{}” rather than the fully featured object we expected.  One of the reasons this was tricky to troubleshoot was that this position object would be copied about at a few different points in our application – most critically, of course, at the time we saved the data to our server.

In a mobile device environment, it’s tricky to debug your code remotely.  We chose to provide a very simple view of the console.log calls on one of the screens in our app, as a way for the user to quickly check for interesting log messages while debugging unexpected behaviour.  Yes, I’m aware of other remote debugging and error reporting solutions.  We’ll be implementing one or more of those soon.  But this option took 3 minutes to spin up and has served its purpose.

However, the simple code we were using to display the results was running objects through JSON.stringify in order to produce a nice human-readable view of the object being debugged.  In this case you’d see log statements showing an empty object on one line, then the next line successfully printing out a property of that object. Very confusing :)

Luckily the workaround is pretty simple. You can provide/attach your own .toJSON() method, or ‘copy’ the object’s properties yourself into a POJO (plain old JavaScript object) as soon as you receive the object.  Once you’re dealing with a POJO, JSON.stringify works wonderfully.  But be careful that ALL of the child properties of the object are copied over as native data types or POJOs – in the case of the position object, its coords property is another complex object that does not JSON.stringify correctly (on iOS at least).
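Here’s a minimal sketch of that copy – the property names come from the Geolocation API; copy whichever coords fields you actually need:

// flatten the position into a POJO before stringifying
navigator.geolocation.getCurrentPosition(function (position) {
  var pojo = {
    timestamp: position.timestamp,
    coords: {
      latitude:  position.coords.latitude,
      longitude: position.coords.longitude,
      accuracy:  position.coords.accuracy
      // ...copy any other coords properties you need
    }
  };
  localStorage.setItem('lastKnownPosition', JSON.stringify(pojo)); // works now
});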

This is the Stack Overflow post that led me in the right direction:

http://stackoverflow.com/questions/11042212/ff-13-ie-9-json-stringify-geolocation-object

A related post:

http://stackoverflow.com/questions/8779249/how-to-stringify-inherited-objects-to-json

angularjs adding controllers at runtime

Angular is fantastic. I’ve been using it for a couple of projects lately and have been learning a lot.  I had to solve a problem today that took quite a while to figure out, so I thought I’d post the solution here.

Normally you need to define all of your controllers, services, etc. when the Angular app is bootstrapped – i.e. you need to pre-load all of the scripts, regardless of what the current view is.

It’s easy enough to load additional scripts at runtime, but a .controller(...) method call will not affect the app after it has been bootstrapped – so your controller code won’t run.  (Apparently not a bug – it’s by design.)
I’m working on a kind of plugin module system, where I wanted to inject the controller into the system at run-time.  The idea is that I want the controller code to load at the same time the view code is loaded – in fact, I want to define both the view and the controller in the same file:
myfile.html:
<div ng-controller="mycontroller">
  <p>this is the view</p>
  <p>some value from controller: {{somevalue}}</p>
</div>
<script>
  // the 'app' has been defined globally (see below)
  app.controller('mycontroller', function ($scope) {
    console.log('mycontroller loaded');
    $scope.somevalue = 'some value';
  });
</script>
where that file is included into the current page with an ng-include (note that ng-include takes an expression, so a literal path must be a quoted string):
<div ng-include="'myfile.html'"></div>
(actually in the future I’ll be loading the content from the database).
Out of the box, that doesn’t work in Angular. But by overriding the .controller method in the .config block, it’s possible to have calls made to .controller work correctly _after_ Angular has completely loaded.
// ensure the 'app' is available globally
window.app = angular.module('myapp', [ /* ... modules ... */ ]);
app.config(function (/* ... other dependencies ... */ $controllerProvider) {
  // keep the original method around, then replace it with one that
  // registers new controllers with $controllerProvider at runtime
  app._controller = app.controller;
  app.controller = function (name, constructor) {
    $controllerProvider.register(name, constructor);
    return this;
  };
});
The above code is not a fully working sample, just a simplified JavaScript translation of my original CoffeeScript code.  But it’s got the relevant bits correct. :)
The same approach should work for services and directives, although you’d use the $provide service instead of $controllerProvider.
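As a sketch (I’ve only needed this for controllers, so treat it as an untested assumption), the service version would look something like:

// the same trick for factories, using $provide
app.config(function ($provide) {
  app.factory = function (name, factoryFn) {
    $provide.factory(name, factoryFn);  // register at runtime
    return this;
  };
});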

Email

Email is hard.

At least SENDING emails from websites is.

Don’t feel bad if you’ve had trouble ‘getting it right’ – you’re not alone. Many companies have made fantastically successful businesses out of making email delivery easier and/or more reliable (Mailchimp, etc.). However, you may not have the volume of email that would justify paying a 3rd-party service to deliver your emails. Perhaps you’re just sending signup confirmations or receipts for your small website.

You can easily send emails in almost any web programming language in a single line of code – what’s so hard about it? The hard part is actually getting those emails delivered to your recipients! :)

I’ve had to address this issue with some of my clients for one of my products, which allows the customer to configure their email settings – i.e. the FROM email address and the optional use of an SMTP server to send the emails. I wound up writing a primer on email services to help them understand their options and why they mattered. That primer is now part of that product’s documentation, but I thought it might be useful to others who stumble across it here.

Email Settings

What email address to use as the FROM address is entirely a marketing decision, but it has technical implications (spam blocking).

The confusing part is that an email FROM someone@somedomain.com does not NEED to be sent from an official email server @somedomain.com. ANY computer can send an email from someone@somedomain.com, directly TO anyone@someotherdomain.com. The email does not need to pass through the email server of the ‘from’ address at all (unless the person sending the email chooses to do so). The core function of email servers is to RECEIVE emails; sending emails through official email servers is optional. This is the root of the email spam problem – email is not secure in any way.

However, most email SERVICES (such as Hotmail or Gmail) will block or flag emails coming TO their customers if the IP address of the machine sending the email doesn’t match the official email server of the sender’s domain (@somedomain.com) – which is the case if the web server itself sends the email without using SMTP. Whether Hotmail or Gmail actually flag or block the email then depends on whether the computer that sent it is listed on one of several ‘blacklists’ of computers known to send spam. If the sending computer is on a blacklist, the recipient won’t even see the email. If it is not, they may get the email, but it will most likely be flagged as possible spam. Many shared hosting web servers are included in those blacklists.

Using an SMTP server to send the email, with a matching FROM address, avoids this blocking and flagging by being more trustworthy to the large email service providers.
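As a concrete sketch – my product’s settings are language-agnostic, but in Node.js with the nodemailer package it looks something like this (server names and credentials are hypothetical); the key point is that the FROM address matches the authenticated SMTP account’s domain:

var nodemailer = require('nodemailer');

// hypothetical SMTP server and credentials for somedomain.com
var transport = nodemailer.createTransport({
  host: 'smtp.somedomain.com',
  port: 587,
  auth: { user: 'info@somedomain.com', pass: 'secret' }
});

transport.sendMail({
  from: 'info@somedomain.com',          // matches the SMTP account's domain
  to: 'customer@someotherdomain.com',
  subject: 'Your receipt',
  text: 'Thanks for your order!'
}, function (err) {
  if (err) console.error('send failed:', err);
});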

Another flavor of the problem: even if you do use an SMTP server, you may still be able to send emails using a FROM address on a different domain than the SMTP server’s. Some SMTP servers will not allow this, but some will – there is nothing in the email specifications that says the FROM address needs to match the account of the user on the email server. So if you use a different FROM address than the SMTP account used to send the email, two problems may arise: 1) the SMTP server may refuse to send the email; and if it does send it, 2) email services such as Hotmail and Gmail may still flag or block the email as spam.

An additional complication is that some web hosting companies will not allow websites to SEND emails (via SMTP or directly) unless they pass through the host’s own email proxy servers. They do this to limit the amount of spam sent by hacked websites (a good thing, although it complicates life for the rest of us). GoDaddy is one such host – you need to configure your email settings to match the GoDaddy documentation.

blocked 3rd party session cookies in iframes

If you use iframes on your websites, you may have encountered the infamous ‘blocked 3rd party cookies’ issue that occurs in Safari – particularly on iOS 7. Safari has defaults that are arguably more secure than most other browsers, but this winds up breaking some websites hosted in iframes. The sessions that the website relies on do not work (users cannot log in, etc.), because the session cookie is not ‘trusted’ by the browser when the website inside the iframe is hosted on a different domain (or subdomain) than the parent website. In some cases simply changing the protocol (http vs https) can cause the same issue. http://stackoverflow.com/questions/11635105/block-third-party-cookies-workaround-facebook-apps-etc is one example of people trying to address this problem. Most of the solutions I found on the web were fairly complicated and required you to change the architecture of your site a fair bit.

However, in a lot of cases the solution can be pretty easy: send the ‘parent’ frame or browser window to the second domain temporarily to set a session cookie for that domain, then redirect the user back to the page on the first domain that hosts the iframe. Once the browser has accepted a cookie from the second domain in a first-party context, that domain is no longer considered ‘3rd party’ by the browser.

This can be done very easily and transparently to the user, using a single file on the second domain which sets the session cookie and redirects the user back. Here’s a PHP example; this file would be hosted on the same domain as the content of the iframe:

<?php
// startsession.php – hosted on domain2 (the same domain as the iframe content)
session_start();                         // sets the session cookie for domain2
$_SESSION['ensure_session'] = true;      // make sure the session is persisted
header('Location: ' . $_GET['return']);  // send the user back to domain1
exit;

Note that this file uses a GET parameter to decide where to redirect the user. This is just for convenience – the destination could have been hard-coded, and you may need to handle URL encoding of the parameter or deal with other security concerns (at minimum, an open redirect like this should validate the return URL). Those concerns are not directly related to this solution.

On a page hosted on the first domain, create a link that bounces the user through the second domain and back to the page that hosts the iframe, like so:

<a href="https://domain2/startsession.php?return=http://domain1/pageWithiFrame.html">page with iFrame</a>

On the first domain, the page with the iframe:

<p>Page hosted on domain1, with iframe content from domain2.</p>
<iframe src="https://domain2/index.php"></iframe>

At this point, the website hosted on domain2 will be able to set/use session cookies, because the user has explicitly authorized this on the parent frame by clicking on the link.

I’ve tested this approach successfully on iOS 7. It works whether the parent domain is http or https.

This post was thrown together pretty quickly – let me know if you have any questions or have feedback on this solution.

Cheers,

Allan

html trick for wrapping long urls

These days, I spend a lot of my time working on mobile development (http://gardenbaysoftware.com).

In mobile development, screen space and layout are huge concerns. One challenge I’ve seen is how to display a long URL on a mobile device. In most cases you can just create a link and use text-overflow techniques (text-overflow: ellipsis). However, if you really want to show the entire text of the URL, but have the inevitable word-wrapping occur at the most visually appealing spots (after the forward slash character), it can be tricky. Not all browsers interpret the word-break properties the same way.

I came across a wonderful technique here: http://www.alistapart.com/articles/the-look-that-says-book/

Simply put, it adds a ‘non-visible space’ character (the zero-width space, &#8203;) after each forward slash in the URL. The browser will happily wrap the text at those invisible spaces. This can be done in JavaScript something like so:

url = url.split('/').join('/&#8203;');

Just make sure you only add this to the _visible_ portion of the text, not the actual href attribute.
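For example (hypothetical variable names):

// the href keeps the real URL; only the display text gets the
// zero-width spaces
var link = document.createElement('a');
link.href = url;                                   // untouched, clickable URL
link.innerHTML = url.split('/').join('/&#8203;');  // wrappable display text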

It works like a charm, breaking text after each / character when needed.

*note* this technique does not work out of the box on a WordPress site like this one, as WordPress mangles/processes the URL text when rendering the page, attempting to encode the ampersand of the &#8203; entity.

Ubuntu on the desktop – my experience

Approximately three months ago, I decided to take the plunge and run Ubuntu as my primary desktop. I did it as an experiment, but have really quite liked the experience, and I don’t expect to move back to Windows, at least for my regular day-to-day use. I’ll likely keep a virtual instance of Windows available for the times when I can’t get a Windows program to run correctly on Ubuntu, but so far I haven’t missed Windows at all.

Don’t get me wrong – it’s not been a perfect experience. But I’m an experienced software developer with a reasonable amount of Linux knowledge, so when faced with problems I had the tools to figure things out. That being said, I think Ubuntu would be a really great alternative for a lot of folks. So much of our computer usage these days is web-based, and modern browsers provide a really stable cross-platform environment for virtually all popular websites and needs. For those times when a Windows program is your only alternative (or you just want to check something out), the WINE compatibility layer does a remarkable job of getting a LOT of Windows programs running on Linux/Ubuntu.

One thing I quite like about the Ubuntu experience is the Unity desktop/launcher – it has some great easy-to-use features, such as multiple desktops, and easy task switching with previews. When I’m doing web development, it’s not unusual for me to have 10 or more windows open at the same time, so those features really help me organize my workspace.

I still occasionally find myself ‘searching’ for the right way to accomplish some minor task (like restoring a minimized window), but I recently found a great ‘cheat sheet’ for Ubuntu, which I highly recommend Ubuntu users review and experiment with. Here’s a direct link to the document – I couldn’t find a link to it on the author’s blog, or I’d have sent you to his blog post directly…

New blog platform

Well, I finally gave in and migrated the old blog to WordPress. I was able to export the old blog posts into WordPress, but it did require a fair bit of editing of things like post dates and statuses (draft, published, etc.). It also did not export comments. Seeing as I only had a few comments :), I wound up adding those by hand, which didn’t preserve the original date/timestamps. And seeing as I’ve already taken down the old blog, it’s kind of tricky to recover the old timestamps for those comments…

The biggest remaining issue is that not all of the URLs and slugs match the old posts perfectly. Many of them are fine after tweaking the permalink settings in WordPress to match the old blog’s format, but WordPress has renamed some of the article names/slugs, and resetting those looks like a manual process…

ubuntu printer install

I got a new Lexmark Pro715 printer yesterday, but had some problems installing it in ubuntu. I finally got it working and thought I’d drop a note here for future reference.

tl;dr version

Install the printer utility from support.lexmark.com – don’t bother looking for printer drivers. After installing, search for ‘lexmark’ in the Unity Dash, as the command-line install does not tell you how to run the utility. Once installed, you must:

sudo chown root /usr/lib/cups/backend; sudo chown root /usr/lib/cups/filter

Details

Initially I was confused about which printer driver I needed to download. There are ‘printer drivers’ listed for the Red Hat/SUSE Linux editions, but only a ‘printer utility’ and ‘scanner drivers’ available for Ubuntu. All of the documentation I could find indicated that I needed the generic Debian driver (it seemed the same as the SUSE one, except packaged as a .deb).

I successfully installed the printer utility (sudo dpkg -i filename) in case it included the drivers, but there was nothing to indicate what binary to run after the install, and running a few of the candidates on the command line led to cryptic error messages. It turns out that you need to search for the utility in the Ubuntu Dash and execute it from there. After the setup wizard completes, it will have installed the printer for you, and it will be available in the list of printers.

Printing failed when I first tried printing the test page, with a cups-insecure-filter error. I solved this by:

$ sudo chown root /usr/lib/cups/backend
$ sudo chown root /usr/lib/cups/filter

All is working fine now!

Dreamhost Trac misconfiguration – how to get authentication working for Trac on Dreamhost

Dreamhost is a great hosting company, and provides a lot of very nice ‘one click installs’ of common software packages. I sometimes use Trac (http://trac.edgewall.org/) for managing hobby development projects, and the Dreamhost one click install worked great, except when it came to setting up authentication (requiring login).

It is simple enough to set up .htaccess and .htpasswd files based on the Trac documentation, but authorization fails for all JavaScript, CSS and related files (prompting the user with multiple login dialogs). After much searching about, I found the solution to the problem here: http://discussion.dreamhost.com/thread-124412.html
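(As an aside, the .htaccess itself is just standard Apache basic auth – a minimal sketch, with hypothetical paths:)

AuthType Basic
AuthName "Trac"
AuthUserFile /home/username/.htpasswd
Require valid-user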

Simply put: the installer misconfigures the htdocs_location setting in trac.ini. Rather than using a relative path, it uses an absolute URL with the full domain name, which causes issues with the authentication configuration when using .htaccess files.

The solution is to change:

htdocs_location = http://www.yourdomain.com/trac/htdocs

to:

htdocs_location = /trac/htdocs

Of course, substitute the correct/actual path to the htdocs folder if you chose a custom name/path for your Trac install.

Works like a charm!

Comet or Long Loop message pattern – put an end to polling!

A while back I posted a possible solution for dealing with long-running processes in a web application. While that solution works for very basic processes, the use of threading in an ASP.NET application can cause a lot of grief (there are just too many ways outside of your control for those threads to be aborted prematurely).

I did a little research and came up with a MUCH better solution – simply execute the ajax request for the long-running process, and then listen for messages on another ajax request. The key to making this work in IIS/.NET, however, is to ensure that your long-running process is a SESSIONLESS request; otherwise it will block further ajax requests until it has completed.
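Here’s a minimal client-side sketch of the pattern (hypothetical endpoints, assuming jQuery); the server holds the /Messages request open until it actually has something to report, so the client isn’t polling on a timer:

// kick off the long-running job (a sessionless endpoint on the server)
$.post('/StartLongProcess');

// the 'long loop': this request stays open until a message is ready
function listen() {
  $.get('/Messages', function (msg) {
    console.log('progress:', msg);
    if (msg !== 'done') listen();   // immediately re-open the listener
  });
}
listen();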