DISCLAIMER: This page is dedicated to those who want to understand the basics of Blink scripting in Nuke. I’m by no means an expert on this matter, but I’ve realized that formalizing knowledge helps a lot. “If you want to master something, teach it."
Let’s start at the beginning: what is Blink Script? If you go to The Foundry’s Nuke website you will find some useful information, but you should first read the dedicated page for developers to get a proper introduction.
Blink scripting is basically a framework inside Nuke that allows users to write code in a C++-like style to process image data and generate an output result.
This page is inspired by the amazing page on Nukepedia by Matt Estela and Pedro Andrade.
The Blink node in Nuke has three main areas:
Node : the Nuke node you interact with in the DAG.
Kernel : where the main code resides and is compiled. Text files containing code can be loaded and saved.
Parameters : where the ‘public' parameters are shown so you can modify the behaviour of the code. It also holds the GPU/CPU options, and lets you publish the node and protect the kernel.
If you are reading this, you probably already know how a node works, so let’s jump straight to the Kernel.
The simplest Blink script can have up to seven main areas in the code; not all of them need to perform a task, and the empty ones can be deleted. For the sake of consistency, let’s keep all of them in case we want to add something there later…
Let’s take a look at the code.
I strongly recommend using an external text editor to work with the code more comfortably. I’m using Sublime Text.
kernel SimpleKernel : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessPoint> src;
  Image<eWrite> dst;

  param:

  local:

  void define() {
  }

  void init() {
  }

  void process() {
    dst() = src();
  }
};
kernel SimpleKernel : ImageComputationKernel<ePixelWise>
In the first line we declare the kernel itself: its name and the way we will process the data (the kernel granularity, ePixelWise here). If you want to know a bit more about the different types of data access you can go to this link to get more information.
Image<eRead, eAccessPoint> src; Image<eWrite> dst;
The second and third lines declare the input and the output. The input (eRead) is named src and the output (eWrite) is named dst. eAccessPoint means that we will only access the pixel at the current position, one and only one pixel.
param:
The fourth line declares the parameters that will be ‘published' in the Kernel Parameters tab so they can be tweaked.
local:
The fifth line declares the parameters that will be used internally and not shown to the user. You can think of them as ‘private variables'.
void define() { }
The sixth line declares a space where you can define the name and default values of the previously established parameters.
void init() { }
The seventh line declares a space where the initial processing occurs. It is called only once each time the image is processed, before the main code runs.
void process() { dst() = src(); }
The eighth line declares a space where the main code is performed.
In this case we are saying that the output (dst) is the same as the input (src). This piece of code literally ‘does nothing'.
Now that we've defined the main areas of work, let’s create a new script that we will call ‘Custom_Color'. We will delete every part that is not necessary and optimize it as much as possible. We could also delete the blank lines, but I’ll leave them in so the code is easier to read.
One thing we can do is write some information next to the code to describe what it is doing. This helps you understand what you did and lets others know what operations you are performing. If you want to write text that will not be part of the code, use the notation ‘ // ‘ before it.
kernel Custom_Color : ImageComputationKernel<ePixelWise>
{
  Image<eWrite> dst;   //output image

  param:
    float4 color;      //parameter

  void define() {
    defineParam(color, "Custom color", float4(0.0f, 1.0f, 0.0f, 0.0f));   //default value
  }

  void process() {
    dst() = color;     //output
  }
};
We've defined the kernel and named it Custom_Color.
We've defined a parameter called “color" as a float4. The output is a float4 (RGBA), so we define the color as a float4 too. The definition of a parameter requires three arguments: the first is the parameter itself, the second is the name it will display, and the third is the default values. In this case we set the second component (green) to 1. When we define a value as a float, we write an 'f' after the value with the decimal point ( 1.0f ).
And finally, we've defined that the output (dst) is our ‘color’ value.
The node will look like this and naturally the output is a beautiful solid green.
Now, if you change the values in the knob, you will see the color changing. The knobs we can create are similar, if not identical, to the regular knobs in any other Nuke node. However, there is a limited set of knob types that we can create in Blink.
As I mentioned before, there is a limited set of knob types we can create inside the kernel in Blink scripting. But there is also the option of creating a custom knob the normal way we would for a gizmo, and then linking it!!
Here is a list of some of the knobs we can create in the kernel, with a short sketch after the list as an example:
bool : (boolean)
int : (single integer value)
int multi_int[4] : (four integer values)
float : (single float value)
float2 : (double float value)
float3 : (triple float value)
float4 : (four float values, this knob will be displayed as a color knob)
float multi_float[5] : (this will display the number of float values specified in the ' [] ')
float3x3 : (matrix type with 3x3 values)
float4x4 : (matrix type with 4x4 values)
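To make this concrete, here is a minimal sketch (my own example, not one of the kernels from this page, and with arbitrary names) showing how a few of these parameter types could be declared and given defaults with defineParam:

kernel ParamTypes_Example : ImageComputationKernel<ePixelWise>
{
  Image<eWrite> dst;    //output image

  param:
    bool enable;        //boolean checkbox
    int samples;        //single integer value
    float gain;         //single float value
    float4 tint;        //four floats, displayed as a color knob

  void define() {
    defineParam(enable, "enable", true);
    defineParam(samples, "samples", 8);
    defineParam(gain, "gain", 1.0f);
    defineParam(tint, "tint", float4(1.0f, 1.0f, 1.0f, 1.0f));
  }

  void process() {
    dst() = tint * gain;    //just so the kernel outputs something
  }
};

The bool and int knobs are not used in process() here; they are only there to show how each type is declared.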
Let’s try to combine what we have done so far: a dst = src and the creation of a color.
For that, we will define an input and multiply it by our custom color.
kernel Custom_ColorMult : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessPoint, eEdgeClamped> src;   //input image with edges clamped
  Image<eWrite> dst;                              //output image

  param:
    float4 color;      //parameter

  void define() {
    defineParam(color, "Custom color", float4(0.0f, 1.0f, 0.0f, 0.0f));   //default value
  }

  void process() {
    dst() = src() * color;   //input multiplied by our color
  }
};
When you declare an input and nothing is connected to the node, it will give you an error, so I created a checkerboard and connected it to the Blink node. We have also added the notation 'eEdgeClamped', which declares that the pixels at the edge should be repeated so they remain accessible. We could write ‘eEdgeNone' instead to cut off the pixels outside the drawn area.
The resulting output is, naturally, the checkerboard multiplied by our beautiful solid green.
There is no particular reason why we used a multiplication. We could have used a ‘ + ’ (addition), a ‘ - ‘ (subtraction) or even a ‘ / ‘ (division). A lot of different operations can be performed, the sky is the limit…
Let’s now try something different. Let’s say we want to create a node that can choose between two outputs, like a standard Switch node.
For that we can create a ‘bool' (boolean) parameter, which can be either true or false. True will display the input and false will display the custom color.
We will need some code that operates with the bool knob and builds a conditional, so that the option the user chooses decides which output is returned.
The conditional can be written like: Switch == true ? A : B
The conditional states that: IF the value of Switch is true, return A; if it’s not (false), return B.
kernel Custom_ColorSwitch : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessPoint, eEdgeClamped> src;   //input image with edges clamped
  Image<eWrite> dst;                              //output image

  param:
    bool Switch;       //switch parameter
    float4 color;      //color parameter

  void define() {
    defineParam(Switch, "Switch", true);   //default value
    defineParam(color, "Custom color", float4(0.0f, 1.0f, 0.0f, 0.0f));   //default value
  }

  void process() {
    dst() = Switch == true ? src() : color;   //conditional
  }
};
Now, let’s try to do something that can actually be useful for you as a compositor. We want to create a UV map from the input image size, to be used as you wish.
For that we have several options, but we will see two:
Sample the width and the height of the input image.
Define the width and the height.
There are a couple of things to take into consideration. The first is that in Blink there are a bunch of ‘calls’ that we can use to our advantage; however, if we want to use them, we have to understand how they work. If we use Image<eRead,eAccessPoint>, the way Nuke computes the image is defined by the amount of height tiles specified in the Settings tab. There is another way to do it: we can use Image<eRead,eAccessRandom>, which lets the kernel access and compute any pixel at any location, so we are not restricted by this kind of ‘resolution‘ parameter.
Using one access method or the other doesn’t mean that one of them will not work; if you change it, you will see the differences for yourself.
The math behind the UV map is very simple. If you want to create a ramp from 0 to 1 across a box, you have to calculate the relative position of every point with respect to a specified length and map it to a defined value. This is called normalization from 0 to 1. If we do a bit of math:
Width = 10
0 / 10 = 0
5 / 10 = 0.5
10 / 10 = 1
Let’s look at this code:
kernel UVmap : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  local:
    float width;       //float value to store the width
    float height;      //float value to store the height

  void init() {
    width = src.bounds.x2;    //function to find right edge
    height = src.bounds.y2;   //function to find top edge
  }

  void process(int2 pos) {
    dst(0) = pos.x / width;    //pixel position X divided by width
    dst(1) = pos.y / height;   //pixel position Y divided by height
    dst(2) = 0.0f;
    dst(3) = 0.0f;
  }
};
First, we defined two local variables: width and height.
In the init space, we've sampled the bounds of the image: src.bounds.x2 and src.bounds.y2. The bounds give us x1 (left edge), x2 (right edge), y1 (bottom edge) and y2 (top edge), so we have stored the position of the right-most and top-most pixels (x2 and y2). In our case, if we sample a 256 x 256 image, width = 256 and height = 256. Makes sense, no?
In the process space we added the notation (int2 pos). That means we can access the X and Y position of the current pixel as integers.
Something else you may notice is that we have separated dst() into four different outputs. This method allows us to specify a value for each of the channels.
dst( 0 ) = red channel
dst( 1 ) = green channel
dst( 2 ) = blue channel
dst( 3 ) = alpha channel
We know that the output image will be a float4 (4 channels: R, G, B, A), so we can separate dst() and control what goes into each channel!!
width = 256 and X pixel position = 0
pos.x / width = 0
width = 256 and X pixel position = 256
pos.x / width = 1
We get a normalized ramp from 0 to 1 on the horizontal axis, and we've done the same on the vertical Y axis.
If we change the input image dimensions, we will still get a ramp normalized in X and Y to the sampled width and height, since we are sampling them from the source.
Let’s try another method, specifying a custom width and height, so we can define and control the ramp ourselves.
In this case we will delete the image input (src) and define two float values as public parameters, so we can control them. Let’s look at the code:
kernel UVmap2 : ImageComputationKernel<ePixelWise>
{
  Image<eWrite> dst;   //output image

  param:
    float width;       //float value to store the width
    float height;      //float value to store the height

  void define() {
    defineParam(width, "width", 256.0f);     //width custom param
    defineParam(height, "height", 256.0f);   //height custom param
  }

  void process(int2 pos) {
    dst(0) = pos.x / width;    //pixel position X divided by width
    dst(1) = pos.y / height;   //pixel position Y divided by height
    dst(2) = 0.0f;             //blue channel as 0.0f
    dst(3) = 0.0f;             //alpha channel as 0.0f
  }
};
Now we have access to the width and height values. If you change them, you will see how the ramp changes.
One thing to mention is that we specified in the Kernel Parameters tab the format we are working in. In our case we set up a square 256 x 256, but you can change it.
Interesting to notice: when you define a public parameter and specify a default value, that default becomes the center point of the knob's slider range.
Something worth trying is to multiply or divide the pos values by a custom parameter that we could create, so we can control both values; see the sketch below. This will give you a deeper understanding of how these values are calculated together. As I said, just for fun…
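Just as a hint for that exercise, here is one possible version (my own sketch, with a hypothetical extra parameter called mult) that multiplies the normalized positions:

kernel UVmap_Mult : ImageComputationKernel<ePixelWise>
{
  Image<eWrite> dst;   //output image

  param:
    float width;       //float value to store the width
    float height;      //float value to store the height
    float mult;        //extra multiplier applied to the normalized values

  void define() {
    defineParam(width, "width", 256.0f);     //width custom param
    defineParam(height, "height", 256.0f);   //height custom param
    defineParam(mult, "mult", 1.0f);         //multiplier default
  }

  void process(int2 pos) {
    dst(0) = (pos.x / width) * mult;    //horizontal ramp scaled by mult
    dst(1) = (pos.y / height) * mult;   //vertical ramp scaled by mult
    dst(2) = 0.0f;
    dst(3) = 0.0f;
  }
};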
Something else we can try is to write some code that applies a transformation to the image. For that we will see how to use the bilinear interpolation function. It takes three arguments: the image to sample, and two float values for the X and Y position. The bilinear function expects the sampled image to be declared with <eAccessRandom>.
Let’s see the code:
kernel Transform : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float2 Offset;     //Offset parameter

  void define() {
    defineParam(Offset, "Position", float2(128.0f, 128.0f));   //Offset defaults
  }

  void process(int2 pos) {
    dst() = bilinear(src, pos.x - (Offset.x - 128), pos.y - (Offset.y - 128));
  }
};
Here we can see that we've created a float2 called ‘Offset’ (with two components, X and Y) and set the defaults to 128. Given that we have a 256 x 256 source image, this puts the pointer at the center of the image.
The bilinear function is written like: bilinear( src, x, y ), where src is the sampled image, and x and y are two float values.
Then we've written the bilinear function with Offset.x and Offset.y, subtracting 128 to compensate for the default values we've established.
If you move the knob values you will see how the image gets translated.
Something to notice is that a pixel position is an integer value, meaning it sits at coordinate 149 or 150, for example; the bilinear function, however, allows us to interpolate between pixels using floats, 149.5 for example.
What about making it a bit more complicated? Let’s try to create a version of this code that allows us to control every channel individually. For that we will duplicate some lines and slightly modify the bilinear function.
When we want to sample a single channel with the bilinear function, we have to specify the component we want as a fourth argument.
bilinear(src, x, y, c), where c is the component (channel): 0 = Red, 1 = Green, 2 = Blue, 3 = Alpha.
kernel TransformRGB : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float2 OffsetRed;     //R Offset parameter
    float2 OffsetGreen;   //G Offset parameter
    float2 OffsetBlue;    //B Offset parameter

  void define() {
    defineParam(OffsetRed, "Position Red", float2(128.0f, 128.0f));       //R Offset defaults
    defineParam(OffsetGreen, "Position Green", float2(128.0f, 128.0f));   //G Offset defaults
    defineParam(OffsetBlue, "Position Blue", float2(128.0f, 128.0f));     //B Offset defaults
  }

  void process(int2 pos) {
    dst(0) = bilinear(src, pos.x - (OffsetRed.x - 128), pos.y - (OffsetRed.y - 128), 0);
    dst(1) = bilinear(src, pos.x - (OffsetGreen.x - 128), pos.y - (OffsetGreen.y - 128), 1);
    dst(2) = bilinear(src, pos.x - (OffsetBlue.x - 128), pos.y - (OffsetBlue.y - 128), 2);
    dst(3) = 1.0f;
  }
};
We've created three different ‘public' values so we can control each channel separately. If we had created just one, we would be modifying all the channels at the same time with the same values, so the effect would not be visible.
We have also given the alpha a value of 1. Why not.
We are not limited to translating values by adding or subtracting from the current pixel position; we can also multiply and divide these values, which gives us a different result: we can scale the image!
Let’s do some more ‘math‘...
If we take a look at the previous UV map: if we multiply the coordinates x0-y0 by 1, the result is obviously x0-y0, and if we multiply the coordinates x1-y1 by 1, the result is x1-y1. However, if we multiply by 2 instead, the image gets scaled, right..? Well, let’s see:
x0-y0 X 2 = x0-y0
x128-y128 X 2 = x256-y256
x256-y256 X 2 = x512-y512
The result is going to be an image scaled from the bottom left corner.
Let’s look at the code:
kernel Scale : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float Scale;       //Scale parameter

  void define() {
    defineParam(Scale, "Scale", 1.0f);   //Scale defaults
  }

  void process(int2 pos) {
    dst() = bilinear(src, pos.x / Scale, pos.y / Scale);   //bilinear function
  }
};
We’ve created a float param called ‘Scale’ and set up the default as 1.0f
And we’ve called the bilinear function dividing the current position of our pixel by the Scale float. (We have to divide to get the inverse.. because we are scaling the world, hence the image is getting smaller if we use multiply).
If you change the value of the knob Scale you will see the image scaling up and down.
But… what if we want to scale our image from the center? This is a bit more complicated, but not that much; let’s try to apply some logic.
Let’s look at some code and try to make sense of it.
kernel ScaleCenter : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float Scale;       //Scale parameter

  local:
    float cx;          //horizontal center
    float cy;          //vertical center

  void define() {
    defineParam(Scale, "Scale", 1.0f);   //Scale defaults
  }

  void init(int2 pos) {
    cx = src.bounds.x2 / 2.0f;   //X center from bounds
    cy = src.bounds.y2 / 2.0f;   //Y center from bounds
  }

  void process(int2 pos) {
    dst() = bilinear(src, ((pos.x - cx) / Scale) + cx, ((pos.y - cy) / Scale) + cy);
  }
};
What we have done is the following:
We’ve created two internal parameters called cx (center X) and cy (center Y). Then, in the init space we’ve calculated the half of this values giving us the middle point of both, basically we’ve found the center of the input image.
In the bilinear function, we’ve translated the entire image from the center to the coordinates x=0 and y=0, then we’ve done the same as before by dividing that by the scale param, and then, we’ve added the same translation to get the image where it was.
Here is a diagram of the operations.
We could also create a float2 (or two floats) to control the horizontal and vertical scale independently: dst() = bilinear(src, ((pos.x - cx) / Scale.x) + cx, ((pos.y - cy) / Scale.y) + cy);
And yes, you could do this operation for each channel (RGBA).
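For reference, here is a minimal sketch of that float2 variant (my own adaptation of the ScaleCenter kernel above, assuming the parameter becomes a float2 named Scale):

kernel ScaleCenterXY : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float2 Scale;      //independent horizontal and vertical scale

  local:
    float cx;          //horizontal center
    float cy;          //vertical center

  void define() {
    defineParam(Scale, "Scale", float2(1.0f, 1.0f));   //uniform scale by default
  }

  void init() {
    cx = src.bounds.x2 / 2.0f;   //X center from bounds
    cy = src.bounds.y2 / 2.0f;   //Y center from bounds
  }

  void process(int2 pos) {
    //same idea as before, but each axis uses its own scale component
    dst() = bilinear(src, ((pos.x - cx) / Scale.x) + cx,
                          ((pos.y - cy) / Scale.y) + cy);
  }
};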
The next step would be… rotation!
Rotations are… a bit more complicated. However, to get rotations in 2D (UV) space there is a ‘formula‘ that we can use. It’s not very complicated, but we have to understand how it works. It’s just simple trigonometry…
If we look at what Wikipedia explains and scroll down to rotations in two dimensions, we can see the following.
How can we make sense of these… 2D matrix rotations…
So, here is the formula with a language that we can actually understand:
X = pos.x * cos(Radians) - pos.y * sin(Radians)
Y = pos.x * sin(Radians) + pos.y * cos(Radians)
Thanks to Erwan Leroy’s page, we have this version of the formula, which we are going to use to get the rotation of our ‘2D matrix‘.
Something else we have to take into account is that this formula expects radians, so we have to convert: Radians = Angle * ( pi / 180 )
That will allow us to set up a parameter that goes from 0 to 360 and get proper rotations with the correct units.
Another thing to take into account is that, as written, the rotation will be performed around the coordinates x0 - y0. But we now know how to move the UV space, perform an operation, and then put everything back in its original position. Let’s look at the code:
kernel Rotation : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float Angle;       //Angle parameter

  local:
    float cx;          //horizontal center
    float cy;          //vertical center
    float4 out;        //out vectors
    float Radians;     //radians
    float pi;          //pi

  void define() {
    defineParam(Angle, "Angle", 180.0f);   //Angle defaults
  }

  void init(int2 pos) {
    cx = src.bounds.x2 / 2.0f;                 //X center from bounds
    cy = src.bounds.y2 / 2.0f;                 //Y center from bounds
    pi = 3.14159265359f;                       //pi constant value
    Radians = (Angle - 180) * (pi / 180.0f);   //degrees to radians
  }

  void process(int2 pos) {
    //output vectors rotated (UV coordinates)
    out.x = ( (pos.x - cx) * cos(Radians) ) - ( (pos.y - cy) * sin(Radians) ) + cx;
    out.y = ( (pos.x - cx) * sin(Radians) ) + ( (pos.y - cy) * cos(Radians) ) + cy;

    dst(0) = bilinear(src, out.x, out.y, 0);   //bilinear function
    dst(1) = bilinear(src, out.x, out.y, 1);   //bilinear function
    dst(2) = bilinear(src, out.x, out.y, 2);   //bilinear function
    dst(3) = 1.0f;
  }
};
Let’s see the steps we’ve done:
We’ve created a parameter ‘Angle‘ and set the default value to 180.
We’ve defined local parameters: cx (center X), cy (center Y), a float4 out (placeholder to perform the operations), Radiants to perform the translation and the PI constant (set at 3.14159265359f).
We calculated the center of the image. CX and CY.
We defined the Radiants conversion with the previously seen formula and we’ve also subtracted 180 since the default value of our public param is 180, so it sets to 0 in reality.
Then, we’ve performed the 2d matrix rotation ‘formula‘ for each of the axis. Something to take into consideration is that when we call the X position and the Y position of the pixel, we subtract CX and CY respectively, that sets up the center of the uv map into x0 - y0 coordinates. After we perform the rotations, we add again the CX and CY to return the two float values to the original coordinates.
Finally we use the bilinear function as we did before.
Here is a diagram illustrating what we have done with this code.
For this next section, I’d like to introduce you to two new concepts.
The first is really, really simple. In some way we’ve already done it, but we haven’t pointed it out. Every time we have performed an operation, we have taken one value, let’s say a float, and done something with it and another float, (pos.x - cx) for example.
However, Blink allows us to simplify this and operate on a float2 with another float2.
With this method we can work in two dimensions (see the sketch after this snippet)…
center = float2(src.bounds.x2 , src.bounds.y2);
position = float2(pos.x, pos.y);
position - center
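Just to illustrate the equivalence, here is a small sketch of my own (not one of the kernels from the original page) where a single float2 subtraction handles both axes at once:

kernel Float2Demo : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;  //input image
  Image<eWrite> dst;                              //output image

  local:
    float2 center;    //image center stored as a float2

  void init() {
    center = float2(src.bounds.x2 / 2.0f, src.bounds.y2 / 2.0f);
  }

  void process(int2 pos) {
    float2 position = float2(pos.x, pos.y);   //current pixel as a float2
    float2 offset = position - center;        //one subtraction handles both axes
    //equivalent to: float2(pos.x - center.x, pos.y - center.y)
    dst(0) = offset.x;   //signed horizontal distance from the center
    dst(1) = offset.y;   //signed vertical distance from the center
    dst(2) = 0.0f;
    dst(3) = 0.0f;
  }
};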
And the second concept is the fact that there are multiple ways of achieving the same result, especially when working with code that can perform trigonometry, algebra and other complex operations.
So the next code we will see allows us to create a radial value or ‘circle‘, and we will see it written in two ways:
The first way uses, again, simple trigonometry to achieve the result with a sqrt() function. The reference we will use comes from the amazing tutorial mentioned at the beginning of this page.
The second way uses some more trigonometry: we will see a way to measure vectors to get their length. For this we will use as reference this also amazing tutorial by Matthew Rickshaw.
Let’s see the first kernel example and take a look at the method used:
kernel Radial : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float mult;        //mult parameter

  local:
    float cx;          //horizontal center
    float cy;          //vertical center
    float xt;          //bounds X axis
    float yt;          //bounds Y axis

  void define() {
    defineParam(mult, "mult", 1.0f);   //mult defaults
  }

  void init(int2 pos) {
    xt = src.bounds.x2;   //bounds X
    yt = src.bounds.y2;   //bounds Y
    cx = xt / 2.0f;       //X center from bounds
    cy = yt / 2.0f;       //Y center from bounds
  }

  void process(int2 pos) {
    //distance formula (sphere2D)
    // sqrt(r*r + g*g) = sqrt( pow(r,2) + pow(g,2) )
    dst() = sqrt( pow( ((pos.x - cx) / (xt / mult)), 2) + pow( ((pos.y - cy) / (yt / mult)), 2) );
  }
};
For this method we’ve defined one public parameter, and four local parameters.
cx = horizontal center (src.bounds.x2 / 2.0f);
cy = vertical center (src.bounds.y2 / 2.0f);
xt = horizontal bounds (src.bounds.x2);
yt = vertical bounds (src.bounds.y2);
The formula we are going to use is the following: sqrt( x*x + y*y ), which we can also write as sqrt( pow( x,2 ) + pow( y,2 ) ), since x*x is pow(x,2).
However, we first have to translate the center of the UV space to the middle point of our image: pos.x-cx and pos.y-cy.
Then we divide those values by the bounds, and we add a multiplier to control the size of the divisor… simple… This allows us to control the radius of the ramp.
This method gives us a ramp that goes from the center to the bounds.
Note that this value is not clamped, so if you change the mult knob to a value of 2.0, for example, and sample the values near the edges, you will see values above one.
If we had defined a float2 for the mult parameter, we could control the width and the height of this radial ramp separately.
We could use this as a mask to multiply a source image and get a ‘vignetting effect‘, for example; see the sketch below.
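Here is a minimal sketch of that idea (my own variant, assuming a float2 parameter named mult and a simple 1 - ramp mask of my own choosing for the vignette):

kernel Radial_XY : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float2 mult;       //separate horizontal and vertical multipliers

  local:
    float cx;          //horizontal center
    float cy;          //vertical center
    float xt;          //bounds X
    float yt;          //bounds Y

  void define() {
    defineParam(mult, "mult", float2(1.0f, 1.0f));   //mult defaults
  }

  void init() {
    xt = src.bounds.x2;
    yt = src.bounds.y2;
    cx = xt / 2.0f;
    cy = yt / 2.0f;
  }

  void process(int2 pos) {
    //same distance formula, but each axis gets its own multiplier
    float ramp = sqrt( pow((pos.x - cx) / (xt / mult.x), 2)
                     + pow((pos.y - cy) / (yt / mult.y), 2) );
    float mask = max(1.0f - ramp, 0.0f);   //invert the ramp and clip negative values
    dst() = src(pos.x, pos.y) * mask;      //simple vignette: darken towards the edges
  }
};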
Now, let’s see the second method. To me this is more interesting, and it introduces a new concept: ‘length‘. The length of a vector is a value that can be stored as a float per pixel and represents a relationship between values, for example the distance of a pixel from the center of the image.
We would also like to have a switch knob to invert the ramp. Easy, we know how to do that: booleans!
kernel Radial_2 : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    float gamma;           //gamma param
    bool invert;           //invert bool

  local:
    float xt;              //width
    float yt;              //height
    float2 position;       //UV coordinates
    float2 direction;      //offset UVs
    float2 center;         //center of the image
    float distance;        //distance from the center
    float MaxDist;         //length from center to corner
    float distanceNorm;    //normalized distance
    float out;

  void define() {
    defineParam(gamma, "gamma", 1.0f);      //gamma default
    defineParam(invert, "invert", false);   //invert default
  }

  void init(int2 pos) {
    xt = src.bounds.x2;                //bounds X
    yt = src.bounds.y2;                //bounds Y
    center = float2(xt / 2, yt / 2);   //center of the image as a float2
    MaxDist = length(center);          //length from center to corner
  }

  void process(int2 pos) {
    float2 position = float2(pos.x, pos.y);      //pixel positions X and Y
    float2 direction = position - center;        //offset position / new coordinates
    float distance = length(direction);          //distance of each pixel from the center
    float distanceNorm = distance / MaxDist;     //normalized distance
    float out = invert == true ? distanceNorm : 1.0f / distance;   //invert conditional

    dst() = pow(out, gamma);
  }
};
If you apply the kernel yourself to a Blink node, you will see that the result is the same. However, the way we got there is a bit different. Let’s see how:
We defined two public parameters: gamma and invert. A float (gamma) to control a pow() function, and a bool (invert) to make the switch.
We’ve defined nine local parameters; let’s see:
float xt = horizontal bounds
float yt = vertical bounds
float2 center = center of the image
float MaxDist = length of the max distance from center to edge
float2 position = uv space (the pixel coordinates)
float2 direction = offset uv space (position - center)
float distance = length of the direction vector
float distanceNorm = the distance normalized by MaxDist
float out = placeholder for output
The steps of this method are the following:
We’ve defined the bounds (xt and yt). We define a float2 as the center. And we define the length of the Maximum Distance (MaxDist) that we can have, from the center to the top right corner. This last value can be used to normalize our values.
Then we’ve defined a float2 as a uv space(position) and we’ve applied an offset to get the x0-y0 coordinates at the center of the image (direction).
After, we’ve calculated the length of this new uv space (direction), getting a float value per pixel with the information of how ‘far’ is this pixel from the center. (distance)
If we take this value and divide it by the maximum length, we will get a normalized version of this value (distanceNorm).
However, we could take the the distance and do a one over distance ( 1 / distance ) to get the inverse (normalized from 1 to 0).
We’ve also created a conditional to switch between this to values, and finally, we’ve defined the dst() as a pow() of the switch and the gamma. The pow() function is generally used to apply a gamma correction curve.
This way we can control the overall gamma (or contrast…) of our radial ramp.
Here is a diagram to illustrate better the method used:
The last part of this entry will be FOR LOOPS.
What is a for loop in C++ and what does it look like?
A for loop is a construct used in programming that allows a task to be performed several times given a condition. Let’s see the basic structure:
for ( int i = 0; i < value; i++ )
What this line of code is saying is very simple:
for() : This indicates that we are going to perform a task several times, ‘for’ as long as a condition holds.
int i = 0 : This establishes the initial value that is going to be checked by our conditional (it doesn’t need to be 0).
i < value : This is our conditional. The loop keeps running while i is smaller than the value we establish here.
i++ : This tells the loop that, after each iteration, one is added to the value i, so it gets checked against the conditional again.
Cool, so now let’s see a piece of code that we can use and understand. (By the way, thanks to Xavier Martin and this page for making things clear as sunbeams.)
kernel For_Loop : ImageComputationKernel <ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    int increments;    //increments param
    int amount;        //amount param

  void define() {
    defineParam(increments, "increments", 1);   //increments default
    defineParam(amount, "amount", 50);          //amount default
  }

  void process(int2 pos) {
    int total = 0;                             //total initial value
    for (int i = 0; i < increments; i++) {     //loop
      total += amount;                         //loop sum
    }
    dst() = src((pos.x + total), pos.y);       //out
  }
};
What is this kernel doing? It is offsetting the pos.x of our source image. However, it builds the offset by running the for loop. Let’s see:
We’ve defined two public parameters, increments and amount. increments is the number that is going to be checked in the conditional, and amount is the number of pixels used for each step of the offset.
In the process space, we’ve defined another (private) variable, total. Yes, we can do that: we can define local variables directly, we just have to specify the type and the initial value. This is the total count of pixels that we feed into the offset, and on every iteration the loop adds amount to it.
Now the loop. We’ve defined the value int i = 0; the conditional states that, while the value i is smaller than the value of increments, the loop body runs, one is added to i, and the condition is checked again. Since the default value of increments is 1, the loop will be executed once. Very often you will see a literal value here, like i < 200: with i starting at 0, that indicates the loop will be performed 200 times.
Then, since the loop runs once and total starts at 0, the amount (50) is added once. Basically, one times fifty = 50.
Finally we sample src at pos.x + total ( 50 ), so the output is offset by 50 pixels, applied only once.
If we change the value of increments to 2 and the amount to 25, we can see that, since the loop has been executed twice, the total amount of pixels will be two times twenty-five = 50.
As a test, to understand the conditional, you can change the initial value from int i = 0 to 4, for example, and you will see that the offset will not occur until your increments value is bigger than 4. After that, the ‘count of iterations‘ will start at 1.
The images below illustrate the fact that one iteration with a 64-pixel offset is the same as two iterations of 32 pixels.
This specific loop is not particularly useful, but I think it illustrates the principles of for loops and triggers. And if you are wondering whether we could feed a value into the i variable: yes, we can.
The trigger is a param value that we can specify, and with this method we can control when the loop will start.
for (int i = trigger; i < increments; i++) {
}
kernel For_Loop2 : ImageComputationKernel <ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    int trigger;       //loop start value
    int increments;    //increments param
    int amount;        //amount param

  void define() {
    defineParam(trigger, "trigger", 10);        //loop trigger
    defineParam(increments, "increments", 2);   //increments default
    defineParam(amount, "amount", 100);         //amount default
  }

  void process(int2 pos) {
    int total = 0;                                   //total initial value
    for (int i = trigger; i <= increments; i++) {    //loop
      total += amount;                               //loop sum
    }
    dst() = src((pos.x + total), pos.y);             //out
  }
};
The trigger is set to 10 and the increments to 2. Since the notation we are using for i is i <= increments, the loop won’t start unless increments is equal to or bigger than the trigger.
If you change the value of increments to 10, you will see that the image moves only 100 pixels, not 10 x 100; that is because the loop has been performed only once!
The notations += and <= are what we call operators. You can check a bit more about them here. Operators are used constantly to perform a variety of …‘operations‘ with values.
For loops can be a bit confusing, I know, but they are a very powerful way of avoiding having to write an endless amount of lines of code to do something. That’s why they are powerful: we can write ‘ two hundred lines of code in just one line ’.
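As an example of a loop doing a bit more obviously useful work, here is a minimal sketch of my own (not one of the kernels from this page or the repository below) of a horizontal box average, where one small loop replaces what would otherwise be dozens of repeated lines:

kernel BoxBlurX : ImageComputationKernel<ePixelWise>
{
  Image<eRead, eAccessRandom, eEdgeClamped> src;   //input image
  Image<eWrite> dst;                               //output image

  param:
    int radius;        //how many pixels to average on each side

  void define() {
    defineParam(radius, "radius", 5);   //radius default
  }

  void process(int2 pos) {
    float4 sum = float4(0.0f, 0.0f, 0.0f, 0.0f);   //accumulator
    for (int i = -radius; i <= radius; i++) {      //loop over the horizontal neighbours
      sum += src(pos.x + i, pos.y);                //eEdgeClamped keeps the edges safe
    }
    dst() = sum / (2.0f * radius + 1.0f);          //average of all the samples
  }
};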
You can find a git repository with all the scripts I’ve used on this page here: https://github.com/GuillemRamisa/Blink_101
So, for now that’s all for this entry, and if you managed somehow to get to the end, I think you deserve some pizza, infinite pizza at least! : )