
The Development of Scalable Function Graphics

[Image sequence: randomly generated bivariate, vector-valued function graphics]

The above sequence of images is made up of rendered bivariate, vector-valued function graphics that were produced without any learning process. Instead, the constituent functions, mathematical operators, variables, and constants were generated and combined into a single function at random. Discontinuous and non-smooth functions such as tan, min, max, abs, random, and if-then-else were also included, allowing for unlimited sharpness and granularity. Because these functions are discontinuous, they produce the same sharp characteristics no matter what resolution the image is displayed at (so long as it is stored as a function graphic). The initial intention was that this behaviour would serve as a means of preserving sharp edges in photographic images when they are scaled up; however, it quickly became apparent that discontinuous functions directly impede the learning process.
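As a concrete illustration of that generation step, here is a minimal Python sketch. It is not the author's implementation: the primitive set, tree depth, and coordinate range are all assumptions, the 'random' primitive is omitted for simplicity, and a single scalar channel stands in for a full vector-valued (RGB) function.

```python
import math
import random

# Smooth and discontinuous primitives to combine at random. The ternary
# branch plays the role of the if-then-else primitive mentioned above.
UNARY = [math.sin, math.cos, abs, math.tan]
BINARY = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b,
          min, max, lambda a, b: a if a > 0 else b]

def random_expr(depth):
    """Recursively combine operators, variables and constants at random."""
    if depth == 0 or random.random() < 0.3:
        # Leaf node: a variable or a fixed constant.
        return random.choice([lambda x, y: x,
                              lambda x, y: y,
                              lambda x, y, c=random.uniform(-10, 10): c])
    if random.random() < 0.5:
        op, child = random.choice(UNARY), random_expr(depth - 1)
        return lambda x, y: op(child(x, y))
    op, l, r = random.choice(BINARY), random_expr(depth - 1), random_expr(depth - 1)
    return lambda x, y: op(l(x, y), r(x, y))

# One random scalar channel, sampled over a 64 x 64 grid; the raw values
# would then be normalised into a displayable brightness range.
f = random_expr(depth=6)
size = 64
pixels = [[f(x / size, y / size) for x in range(size)] for y in range(size)]
```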

 

For machine learning algorithms to work, such as genetic algorithms that simulate the process of evolution (by mutating, evaluating, and selecting solutions), learning must proceed in relatively small steps that gradually accumulate. Using discontinuous functions, however, meant that even small mutations to the function could produce dramatic changes in the resulting image; for example, the output of tan at 89.9° is dramatically different from its output at 90.1°. This prevents subtle but beneficial mutations from accumulating effectively: a single drastic mutation is far more likely to be harmful than beneficial, ruining whatever progress has been made so far. Additionally, almost any mutation to a constituent function, operator, or variable (as opposed to a constant) had a relatively drastic effect: changing 100+100 to 100-100, for example, or max(0,x) to min(0,x), or y to x. Mutations to mathematical constants, on the other hand, can be made controllably subtle (changing 50 to 50.00001, for example), so long as the constants are not contained within discontinuous functions.
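The asymmetry is easy to demonstrate: the same small mutation to a constant is nearly invisible in a smooth context but catastrophic next to a discontinuity. A tiny illustration (angles in degrees, matching the tan example above):

```python
import math

def smooth(c):
    return c * 2 + 1                    # continuous everywhere in c

def discontinuous(c):
    return math.tan(math.radians(c))    # asymptote at c = 90 degrees

delta = 0.2  # one "small" mutation to a constant
print(smooth(50.0), smooth(50.0 + delta))                 # 101.0 vs 101.4
print(discontinuous(89.9), discontinuous(89.9 + delta))   # ~ +573 vs ~ -573
```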

 

Fortunately, limiting the learning process to the manipulation of constants also allows a more efficient learning algorithm to be used. Instead of introducing random mutations and then retrospectively selecting the beneficial ones, a 'backpropagation' algorithm is capable of introducing alterations that are far more likely to be beneficial. It does this by calculating the difference between the actual output and the desired output of the function, and then nudging each constant slightly up or down according to the sign and size of that error.
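In sketch form, the update rule looks something like the following. This is a deliberately tiny example, assuming a fixed function shape f(x, y) = a*x + b*y + c whose three constants are the only trainable parts; a real SFG function is far larger, and full backpropagation would chain this same error-driven update through every nested sub-function.

```python
def f(x, y, consts):
    a, b, c = consts
    return a * x + b * y + c

def train(samples, consts, rate=0.01, epochs=2000):
    """samples: a list of ((x, y), desired_value) pairs."""
    for _ in range(epochs):
        for (x, y), desired in samples:
            error = desired - f(x, y, consts)
            # The gradient of f with respect to (a, b, c) is (x, y, 1), so
            # each constant is nudged up or down in proportion to the error.
            consts[0] += rate * error * x
            consts[1] += rate * error * y
            consts[2] += rate * error
    return consts

# Recover the plane z = 3x - 2y + 0.5 from sampled points.
data = [((x / 4, y / 4), 3 * (x / 4) - 2 * (y / 4) + 0.5)
        for x in range(5) for y in range(5)]
print(train(data, [0.0, 0.0, 0.0]))   # approaches [3.0, -2.0, 0.5]
```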

 

Significant modifications to the standard backpropagation algorithm have, however, been needed to make it fit for the purpose of Scalable Function Graphic conversion. Additionally, to solve the problem within a tractable time frame, the original image needs to be divided into a series of smaller tiles, with a different mathematical function then trained to model each tile. This is necessary because the ability of even an extremely large continuous function to model the complexity of an entire photographic image appears to diminish exponentially as the original image grows larger.
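In outline, the tiling step might look like the sketch below. The 20-pixel tile size is taken from the comparison that follows; the per-tile coordinate normalisation is an assumption, chosen so that every tile's function sees the same [0, 1] input range.

```python
TILE = 20  # tile edge length in pixels, matching the example below

def tile_training_sets(image):
    """image: a 2-D list of pixel values. Yields ((i, j), samples) for each
    TILE x TILE block, where samples are ((x, y), value) training pairs."""
    height, width = len(image), len(image[0])
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            samples = [(((x - tx) / TILE, (y - ty) / TILE), image[y][x])
                       for y in range(ty, min(ty + TILE, height))
                       for x in range(tx, min(tx + TILE, width))]
            yield (tx // TILE, ty // TILE), samples

# Each block then becomes the training set for its own function, e.g. using
# the train() sketch above:
#   sfg = {key: train(samples, [0.0, 0.0, 0.0])
#          for key, samples in tile_training_sets(image)}
```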

 

For comparison, here are the results of four different scaling techniques (including a Scalable Function Graphic) applied to a 20 by 20 pixel tile within the Wikimedia Commons image 'Feathered Dusk' by Jessie Eastland. From left to right: nearest neighbour algorithm, bicubic algorithm, fractal algorithm, and SFG (sharp edge mode). If you would like to see the length and complexity of the vector-valued function required for the SFG to produce such a high quality rendering (for just this one tile), click here.
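The reason an SFG can be compared with interpolation algorithms at all is that upscaling it is simply a matter of evaluating the tile's function over a denser grid. A toy illustration, using a stand-in tile function rather than a trained one:

```python
def render(f_tile, resolution):
    """Evaluate a tile's function over a resolution x resolution grid."""
    return [[f_tile(x / resolution, y / resolution)
             for x in range(resolution)]
            for y in range(resolution)]

f_tile = lambda x, y: 1.0 if x > y else 0.0   # a hard diagonal edge
thumb = render(f_tile, 20)   # the tile at its original 20 x 20 size
big = render(f_tile, 200)    # the same tile at 10x scale; the edge stays sharp
```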
