Channel: GLSL / Shaders - Processing 2.x and 3.x Forum

OpenGL / Performance


Comparing my home and my office computer, I'm wondering why there is no difference. At home I use an Intel i5 with HD Graphics 530; my office PC uses an Nvidia K1200. But comparing a few of my programs, there is no difference. What is the best way to measure performance in Processing?
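For a rough comparison that works on any machine, time a fixed workload and compare the numbers; inside a sketch, printing `frameRate` once a second or timing `draw()` with `System.nanoTime()` does the same job. Here is a minimal plain-Java sketch of the idea (the workload and names are just placeholders, not anything from the original post); note that GPU differences only show up when the sketch is actually GPU-bound:

```java
// Best-of-k timing of a fixed workload: comparing this number
// across machines is a crude but honest benchmark.
public class PerfTimer {
    // Placeholder workload; swap in whatever you want to measure.
    static double workload(int n) {
        double acc = 0;
        for (int i = 0; i < n; i++) acc += Math.sin(i) * Math.cos(i);
        return acc;
    }

    // Run the workload k times and keep the best (least noisy) time.
    static double bestElapsedMillis(int n, int k) {
        double best = Double.MAX_VALUE;
        for (int run = 0; run < k; run++) {
            long t0 = System.nanoTime();
            workload(n);
            best = Math.min(best, (System.nanoTime() - t0) / 1e6);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(bestElapsedMillis(100_000, 5) + " ms");
    }
}
```

If both machines report similar times for a sketch, the sketch is probably CPU-bound (or vsync-capped), which would explain why the K1200 shows no advantage.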


Shader problem with two textures


Hello, I am trying to mix two images with a shader. That works, but when I want to display the original image at the same time, that one ends up under the shader effect too; I don't know if that is understandable. So I wrote a little sketch to make it clearer; there is a link on GitHub as well.

Processing

PImage src_1 ;
PImage src_2 ;
PShader rope_shader_overlay;
void setup() {
  size(100,100,P2D);
  src_1 = new PImage(width,height);
  src_2 = new PImage(width,height);
  rope_shader_overlay = loadShader("rope_frag_overlay.glsl");

  change_pixel();
}

void draw() {
  background(0);
    change_pixel();
  PImage work = new PImage(src_2.width, src_2.height);
  // work.pixels = src_2.pixels ;
  for(int i = 0 ; i < work.pixels.length ;i++) {
    work.pixels[i] = src_2.pixels[i];
  }

 PImage display = overlay(work, src_1, .5,.6,.3,1);

  // image(display, 10,10);
  image(display,0,0);
  image(src_1, -width + width/3, 0);
  image(src_2, width -width/3, 0); // this image has the same appearance as the shader output.
}


PImage overlay(PImage tex, PImage inc, float... ratio) {
  shader(rope_shader_overlay);
  rope_shader_overlay.set("texture",tex);
  rope_shader_overlay.set("incrustation",inc);
  rope_shader_overlay.set("ratio",ratio[0],ratio[1],ratio[2],ratio[3]);
  return tex;
}

void change_pixel() {
  src_1.loadPixels() ;
  src_2.loadPixels() ;
  for(int i = 0 ; i < src_1.pixels.length ; i++) {
    src_1.pixels [i] = color(random(255),random(255),random(255));
    src_2.pixels [i] = color(abs(sin(frameCount *.01)) *255,abs(sin(frameCount *.02)) *255,abs(sin(frameCount *.001)) *255);
  }
  src_1.updatePixels();
  src_2.updatePixels();
}
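One thing worth checking in the sketch above: `shader()` stays active for every subsequent `image()` call in the frame, so `src_1` and `src_2` are drawn through `rope_frag_overlay.glsl` too. Calling `resetShader()` after the shaded draw restores the default texture shader. A hedged fragment of how `draw()` could separate the two:

```processing
image(display, 0, 0);                // drawn with rope_shader_overlay active
resetShader();                       // back to the default pipeline
image(src_1, -width + width/3, 0);   // now drawn without the overlay effect
image(src_2, width - width/3, 0);
```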

GLSL

/**
ROPE - Romanesco processing environment –
* Copyleft (c) 2014-2017
* Stan le Punk > http://stanlepunk.xyz/
Shader mix from incrustation to texture
Overlay effect
v 0.0.1
*/
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PROCESSING_TEXTURE_SHADER

varying vec4 vertTexCoord;
uniform sampler2D texture;
uniform sampler2D incrustation;
uniform vec4 ratio;




void main() {
  vec4 a = vec4(texture2D(texture,vertTexCoord.st));
  vec4 b = vec4(texture2D(incrustation,vertTexCoord.st) *ratio);
  vec4 rgba = a+b;
  gl_FragColor = rgba;
}

How to send modelview matrix to vertex shader


Hello, I need some help understanding vertex shaders in Processing.

What I want to do is just change vertex coordinates in the vertex shader, as on "www.vertexshaderart.com". In order to do that:

  1. restore the vertex coordinates into world coordinates, like worldPos = modelviewInv * vertex

  2. move vertices as we like

  3. transform the vertex coordinates back into clip coordinates, like newPos = projection * modelview * worldPos
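Steps 1 and 3 rely on the inverse and the original matrix cancelling: modelview * (modelviewInv * vertex) gives back the untouched position. A tiny plain-Java sanity check of that round trip, using a translation matrix and its exact inverse (row-major here just for readability; GLSL matrices are column-major):

```java
// inv(M) * v moves into "world" space; M * world returns the original.
public class MatRoundTrip {
    // Multiply a 4x4 row-major matrix by a 4-vector.
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) r[i] += m[i][j] * v[j];
        return r;
    }

    // Translation by (tx, ty, tz); its inverse translates by the negation.
    static double[][] translate(double tx, double ty, double tz) {
        return new double[][] {{1,0,0,tx}, {0,1,0,ty}, {0,0,1,tz}, {0,0,0,1}};
    }

    public static void main(String[] args) {
        double[] vertex = {3, 4, 5, 1};
        double[] world = mul(translate(-10, 0, 0), vertex); // "modelviewInv"
        double[] back  = mul(translate( 10, 0, 0), world);  // "modelview"
        System.out.println(back[0] + ", " + back[1] + ", " + back[2]);
    }
}
```

Editing `world` between the two multiplications (step 2) is what makes the displacement happen in world space rather than eye space.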

According to the official documentation (https://github.com/processing/processing/wiki/Advanced-OpenGL), the modelview matrix in Processing's vertex shader is an IDENTITY matrix for performance reasons. That means our vertex shader does not know the original modelview matrix, so we need to send the unmodified modelview matrix from the Processing side to the vertex shader explicitly.

I thought the code below might work, but it fails, and I do not know why:

Processing side:

PGraphicsOpenGL pg = (PGraphicsOpenGL)g;
shader.set("modelviewOriginal", pg.modelview);//send the modelview matrix to the uniform mat4 variable "modelviewOriginal"

vertex shader side:

vec4 worldPos = modelviewInv * vertex;//restore world coordinates
worldPos = vec4(worldPos.x + 100., worldPos.yzw);//for example, translate x pos
vec4 newPos = projection * modelviewOriginal * worldPos;//re-restore eye coordinates

Probably I am missing some basic stuff that I have not been able to figure out so far...

So... please help!

Shader and process on GPU


Say I have a particle system with 200 particles, each checking the distance to every other particle to draw lines according to the distance, etc.

Say I want the same particle system, but with 2,000 particles instead of 200. That gives me a terrible framerate, even when I'm using P2D or P3D.

I have been looking a little into shaders and wonder whether it would be a better option to do the calculations on the GPU.

Would it be possible to write a shader that does all the calculations on the GPU instead?
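For scale: an all-pairs distance check grows quadratically, n·(n−1)/2 unordered pairs, so going from 200 to 2,000 particles multiplies the per-frame work by roughly 100:

```java
public class PairCount {
    // Unordered pairs an all-pairs distance check must visit.
    static long pairs(long n) { return n * (n - 1) / 2; }

    public static void main(String[] args) {
        System.out.println(pairs(200));   // 200 particles
        System.out.println(pairs(2000));  // 2000 particles
    }
}
```

So moving that O(n²) loop to the GPU, or cutting it down CPU-side with a spatial grid, is the usual answer.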

Thanks

Warping a texture with another texture


I am trying to warp an image with another image, but I can't find a way to do it. The idea, in the end, is to use a stable-fluids simulation to warp the image. Here is a little example showing the problem: link sketch problem

link project in problem version

So any ideas are very, very welcome!

shader frag

/**
ROPE - Romanesco processing environment –
* Copyleft (c) 2014-2017
* Stan le Punk > http://stanlepunk.xyz/

Shader to warp texture

Render fluid
v 0.0.2
*/
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PROCESSING_TEXTURE_SHADER

#define PI 3.1415926535897932384626433832795

varying vec4 vertTexCoord;
uniform sampler2D texture;

uniform int mode;
uniform float roof_component_colour;

uniform sampler2D vel_texture;
uniform sampler2D dir_texture;
uniform vec2 grid_wh;

float map(float value, float start1, float stop1, float start2, float stop2) {
  float result = start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
  return result;
}
vec2 cartesian_coord(float angle) {
  float x = cos(angle);
  float y = sin(angle);
  return vec2(x,y);
}

void main() {

  vec2 ratio = gl_FragCoord.xy / grid_wh;
  // vec2 ratio = vertTexCoord.st / grid_wh;

  vec4 vel = texture2D(vel_texture, ratio);
  vec4 dir = texture2D(dir_texture, ratio);


  // rendering picture ;
  if(mode == 0 ) {

    float angle_rad = map(dir.x, 0.0, roof_component_colour, -PI, PI); // float literals: strict GLSL ES does not auto-convert int to float
    vec2 dir_cart = cartesian_coord(angle_rad) ;

    // float gap = max(grid_wh.x, grid_wh.y);
    // vec2 translate_pix = dir_cart.xy *vel.x / gap;
    vec2 translate_pix = dir_cart.xy *vel.x ;

    vec2 coord_dest = vertTexCoord.st +translate_pix ;
    vec4 tex_colour = texture2D(texture, coord_dest);

    gl_FragColor = tex_colour;
  }
  // velocity
  if(mode == 1 ) {
    gl_FragColor = texture2D(vel_texture, vertTexCoord.st);
  }
  // direction force field
  if(mode == 2) {
    gl_FragColor = texture2D(dir_texture, vertTexCoord.st);
  }

}

Processing

PImage tex_velocity, tex_direction ;
PShader warping;
PImage img ;
int grid_w, grid_h ;
void setup() {
  size(600,375,P2D);
  img = loadImage("pirate_small.jpg");
  grid_w = 60 ;
  grid_h = 37 ;
  tex_velocity = createImage(grid_w,grid_h,RGB);
  tex_direction = createImage(grid_w,grid_h,RGB);
  warping = loadShader("shader/warp/rope_warp_fluid.glsl");
  noise_img(tex_velocity, 20, .1, .1); // max translate for the pixel
  noise_img(tex_direction, 360, .1, .1); // degree direction
}

void draw() {
    if(frameCount%60 == 0) {
        noise_img(tex_velocity, 20, .1, .1); // max translate for the pixel
        noise_img(tex_direction, 360, .1, .1); // degree direction
    }

    warping.set("mode", 0) ;
    warping.set("texture",img);
    warping.set("roof_component_colour",g.colorModeX);
    warping.set("grid_wh",grid_w,grid_h);

  warping.set("vel_texture",tex_velocity);
  warping.set("dir_texture",tex_direction);
  shader(warping);

  image(img,0,0);
  resetShader();
  image(tex_velocity,5,5);
  image(tex_direction,grid_w +15 ,5 );
}


float x_offset, y_offset ;
void noise_img(PImage dst, int max, float ratio_x, float ratio_y) {
    noiseSeed((int)random(10000));
    for(int x = 0 ; x < dst.width ; x++) {
        x_offset += ratio_x ;
        for(int y = 0 ; y < dst.height ; y++) {
            y_offset += ratio_y ;
            float v = map(noise(x_offset,y_offset),0,1,0,max);
            v = (int)map(v,0,max,0,g.colorModeX);
            int c = color(v,v,v,g.colorModeA) ;
            dst.set(x,y,c);
        }
    }
}

Kinect + shader. how to mask


So, I've been trying to find/understand/develop a way to mask the Kinect figure with a shader (I mean putting a shader over the Kinect figure), with no success.

Does anyone have an example or a tutorial I should look at to accomplish this?

Deformation of an .obj file.


Hi all, just learning shaders. Is it possible to deform an .obj file using a shader (like it is being sucked into a black hole)?

Would this be in the vertex shader?

Thanks.

my first shader(solved)


Hello to all... I was trying to program my first shader. What it should do is simple: color all the pixels red. But I'm finding a problem: the code executes, but nothing happens, so I guess I'm doing something wrong.

processing sketch:

PShader myShader;

void setup() {
  size(250, 250, P2D);
  noSmooth();
  myShader = loadShader("shader.glsl");
}

void draw() {
  shader(myShader);
}

shader sketch:

#define PROCESSING_COLOR_SHADER
vec4 f = vec4(1.,0.,0.,1.); // f will be used to store the color of the current fragment

void main(void) {
  gl_FragColor = f;
}
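(Marked solved; for the record, one common cause of this exact symptom is that `shader(myShader)` only selects the shader; fragments are generated only for geometry drawn afterwards. A hedged sketch of the usual fix:)

```processing
// A fragment shader colors fragments, and fragments come from geometry,
// so draw something that covers the canvas after selecting the shader.
void draw() {
  shader(myShader);
  rect(0, 0, width, height);  // every fragment of this rect turns red
}
```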

thank you very much to everyone for the support :D :D


glitch shader, it compiles but doesn't work!!


hi guys... I'm working on this glitch shader I found on this website: https://www.shadertoy.com/view/ls3Xzf

Now I am editing part of the code to make it compile in Processing.

It actually compiles, but it doesn't work.

processing code:

PShader glitch;
PImage img, glitchImg;

void setup() {
  size(540, 540, P3D);
  img=loadImage("pietroGogh.jpg");
  glitchImg=loadImage("glitch.jpg");
  glitch = loadShader("glitch2.glsl");
  stroke(255);
  background(0);
  glitch.set("iResolution", new PVector(800., 600., 0.0) );
}

void draw() {
  strokeWeight(1);
  //glitch.set("iGlobalTime", random(0, 60.0));
  glitch.set("iTime", millis());
  if (random(0.0, 1.0) < 0.4) {
    shader(glitch);
  }
  image(img, 0, 0);
}
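Two things in this sketch are worth a hedged guess. First, `millis()` is an `int`, while the shader declares `uniform float iTime`; passing seconds as a float matches both the type and the Shadertoy convention, and also keeps `fract(sin(iTime)*1e4)` from collapsing under float precision once the raw millisecond count gets large. Second, `iResolution` is set from a 3-component PVector but declared `vec2` in the shader. A possible tweak:

```processing
// iTime as float seconds, matching "uniform float iTime"
glitch.set("iTime", millis() / 1000.0f);
// iResolution matching "uniform vec2 iResolution"
glitch.set("iResolution", (float) width, (float) height);
```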

glsl code:

uniform sampler2D texture;
uniform vec2 iResolution;
uniform float iTime;
//varying vec4 vertTexCoord;


float rand () {
    return fract(sin(iTime)*1e4);
}

void main()
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;

    vec2 uvR = uv;
    vec2 uvB = uv;

    uvR.x = uv.x * 1.0 - rand() * 0.02 * 0.8;
    uvB.y = uv.y * 1.0 + rand() * 0.02 * 0.8;

    //
    if(uv.y < rand() && uv.y > rand() -0.1 && sin(iTime) < 0.0)
    {
        uv.x = (uv + 0.02 * rand()).x;
    }

    vec4 c;
    c.r = texture(texture, uvR).r;
    c.g = texture(texture, uv).g;
    c.b = texture(texture, uvB).b;


    float scanline = sin( uv.y * 800.0 * rand())/30.0;
    c *= 1.0 - scanline;

    //vignette
    float vegDist = length(vec2(0.5, 0.5) - uv); // (0.5, 0.5) without vec2() is a comma expression, not a vector
    c *= 1.0 - vegDist * 0.6;

    gl_FragColor = c;
}

Does anyone have any idea why this happens? Sorry if there are trivial mistakes, but I'm quite new to shader languages. Thank you all!

Resources for learning glsl / shaders


Hi!

Over the last month I have learned a lot about fragment shaders. I have been through a lot of web-based sources, like the really good resource "The Book of Shaders", and various web-based articles and tutorials. But most of them are about fragment shaders. I want to continue developing my shader skills and learn more about compute shaders, etc.

I want to learn more about developing particle systems. I also want to learn how to apply different techniques and algorithms such as the Game of Life, reaction-diffusion, building fractals, flow fields, video processing, etc.

I mainly use Processing and TouchDesigner for my shaders.

Does anyone have some good tips of where to go next? Books, webbased tutorials etc. What resources can you recommend to learn and improve skills in glsl shaders?

Thanks a lot

Frosted Glass (Blurry Glass) on Processing


Hello

I'm looking for a way to code a frosted-glass surface on top of a P3D render in Processing. I guess it will involve a GLSL shader; I've played a bit with shaders but didn't find a way to constrain one to a surface.

float alpha = 0;

void setup() {
  size(640, 360, P3D);
  rectMode(CENTER);
}

void draw() {
  background(255, 128, 128);

  pushMatrix();
  fill(255);
  translate(width/2, height/2, -30);
  rotateY(alpha);
  rotateX(alpha);
  box(150);
  popMatrix();

  pushMatrix();
  fill(255, 64);
  translate(width/2, height/2, 120);
  rect(0, 0, 200, 100);
  popMatrix();

  alpha+= 0.01;
}

What do you think would be the best way to make this rectangle blur what is behind it?
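The frosted-glass look is essentially a blur restricted to one region: draw the scene, grab the pixels behind the rectangle, blur them, then draw the blurred patch (plus the faint white fill) where the rect goes. In Processing the grabbing and blurring can be done with `get()` into a `PGraphics` and `filter(BLUR)`, or with a two-pass blur shader for speed. The core operation is just a local average; here is a minimal 3x3 box blur over a grayscale array in plain Java, all names illustrative:

```java
public class BoxBlur {
    // 3x3 box blur of a w*h grayscale image stored row-major;
    // border pixels average only the neighbors that exist.
    static double[] blur(double[] src, int w, int h) {
        double[] dst = new double[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double sum = 0;
                int count = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                            sum += src[ny * w + nx];
                            count++;
                        }
                    }
                }
                dst[y * w + x] = sum / count;
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        double[] flat = new double[9];
        java.util.Arrays.fill(flat, 0.5);
        System.out.println(blur(flat, 3, 3)[4]);
    }
}
```

Run several passes (or a bigger kernel) for a stronger frosted effect; a single 3x3 pass is usually too subtle.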

cheers

"non flat" shading of an OBJ file?

$
0
0

Hello, I loaded an obj (with its MTL, texture, etc.) using loadShape. However, it renders flat. Is there a simple way to display it with Phong shading or something similar?

gpu fft/histogram with shader


hi guys... I'm working on this shader... it generates a real-time FFT/histogram of an audio (or image/video) input.

Now the problem is: it works, but I created it on Shadertoy, and I'm having a little trouble importing it into Processing. I have no idea how Shadertoy passes the audio/image input to the GLSL texture function (iChannel0...). In other words: how do I create the appropriate input to generate the same effect I generate here: https://www.shadertoy.com/

To try it, copy and paste the following code into any of the examples the site displays, press the play button at the bottom left, then click the iChannel0 box, select Music, and choose one of the proposed tracks.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {

    vec2 uv = fragCoord.xy / iResolution.xy;

    vec2 res = floor(400.0*vec2(10.15, iResolution.y/iResolution.x));

    vec3 col = vec3(0.);

    vec2 iuv = floor( uv * res )/res;

    float fft = texture(iChannel0, vec2(iuv.x, 0.1)).x;
    fft *= fft;

    if(iuv.y<fft) {
        col = vec3(255.,255.,255.-iuv.y*255.);
    }

    fragColor = vec4(col/255.0, 1.0);
}
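As far as I know, Shadertoy's music input arrives in the shader as a small texture (512x2 on the site): the first row holds smoothed FFT magnitudes and the second the raw waveform, both as 0..255 grayscale, which is why `texture(iChannel0, vec2(iuv.x, 0.1)).x` reads the spectrum. To feed `pg_src` equivalently, write your Minim FFT band values into a one-row image each frame. The packing step, in plain Java (band values assumed already normalized to 0..1):

```java
public class FftTexture {
    // Pack normalized FFT magnitudes (0..1) into opaque grayscale ARGB
    // pixels, the int format Processing's PImage.pixels expects.
    static int[] packRow(float[] bands) {
        int[] px = new int[bands.length];
        for (int i = 0; i < bands.length; i++) {
            float clamped = Math.max(0f, Math.min(1f, bands[i]));
            int v = Math.round(clamped * 255f);
            px[i] = 0xFF000000 | (v << 16) | (v << 8) | v;
        }
        return px;
    }

    public static void main(String[] args) {
        int[] row = packRow(new float[] {0f, 0.5f, 1f});
        System.out.printf("%08X %08X %08X%n", row[0], row[1], row[2]);
    }
}
```

In the sketch, those ints would go into the pixels of a small PImage (e.g. 512 wide) that is then passed to `toy.set_iChannel(0, ...)`.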

below the code for the implementation in processing:

import ddf.minim.*;
import com.thomasdiewald.pixelflow.java.DwPixelFlow;
import com.thomasdiewald.pixelflow.java.imageprocessing.DwShadertoy;

Minim minim;
AudioInput in;
DwPixelFlow context;
DwShadertoy toy;
PGraphics2D pg_src;

void settings() {
  size(1024, 820, P2D);
  smooth(0);
}
void setup() {
  surface.setResizable(true);

  minim = new Minim(this);
  in = minim.getLineIn();

  context = new DwPixelFlow(this);
  context.print();
  context.printGL();

  toy = new DwShadertoy(context, "fft.frag");
  pg_src = (PGraphics2D) createGraphics(width, height, P2D);

  pg_src.smooth(0);

  println(PGraphicsOpenGL.OPENGL_VENDOR);
  println(PGraphicsOpenGL.OPENGL_RENDERER);
}

void draw() {
  pg_src.beginDraw();
  pg_src.background(0);
  pg_src.stroke(255);
  //code to convert audio input to a correct input for the function :toy.set_iChannel(0, pg_src);
  pg_src.endDraw();

  toy.set_iChannel(0, pg_src);
  toy.apply(this.g);

  String txt_fps = String.format(getClass().getSimpleName()+ "   [size %d/%d]   [frame %d]   [fps %6.2f]", width, height, frameCount, frameRate);
  surface.setTitle(txt_fps);
}

and the glsl code for fft.frag file (is the same as before but I added the environment variables that shaderToys generates automatically and some other pixelFlow library instance to communicate with the fft.frag file):

#version 150

#define SAMPLER0 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER1 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER2 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER3 sampler2D // sampler2D, sampler3D, samplerCube

uniform SAMPLER0 iChannel0; // image/buffer/sound    Sampler for input textures 0
uniform SAMPLER1 iChannel1; // image/buffer/sound    Sampler for input textures 1
uniform SAMPLER2 iChannel2; // image/buffer/sound    Sampler for input textures 2
uniform SAMPLER3 iChannel3; // image/buffer/sound    Sampler for input textures 3

uniform vec3  iResolution;           // image/buffer          The viewport resolution (z is pixel aspect ratio, usually 1.0)
uniform float iTime;                 // image/sound/buffer    Current time in seconds
uniform float iTimeDelta;            // image/buffer          Time it takes to render a frame, in seconds
uniform int   iFrame;                // image/buffer          Current frame
uniform float iFrameRate;            // image/buffer          Number of frames rendered per second
uniform vec4  iMouse;                // image/buffer          xy = current pixel coords (if LMB is down). zw = click pixel
uniform vec4  iDate;                 // image/buffer/sound    Year, month, day, time in seconds in .xyzw
uniform float iSampleRate;           // image/buffer/sound    The sound sample rate (typically 44100)
uniform float iChannelTime[4];       // image/buffer          Time for channel (if video or sound), in seconds
uniform vec3  iChannelResolution[4]; // image/buffer/sound    Input texture resolution for each channel

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec2 res = floor( 1000.0*vec2(1.0, iResolution.y/iResolution.x) );

    vec3 col = vec3(0.);

    vec2 iuv = floor( uv * res )/res;

    float f = 1.11-abs(fract(uv.x * res.x));
    float g = 1.11-abs(fract(uv.y * res.y));

    float fft = texture(iChannel0, vec2(iuv.x, 0.2)).x;
    fft = 1.*fft*fft;

    if(iuv.y<fft) {
        col = vec3(74.0,82.0,4.0);
    }

    fragColor = vec4(col/255.0, 1.0);
}

How to port Shadertoy multipass GLSL shader


Hi,

I'm trying to port these Shadertoy fragment shaders to Processing using PShader: https://www.shadertoy.com/view/XsG3z1#

I'm not sure I understood how to correctly do the multipass.

Here's my attempt so far:

BufA.frag:

// Reaction-diffusion pass.
//
// Here's a really short, non technical explanation:
//
// To begin, sprinkle the buffer with some initial noise on the first few frames (Sometimes, the
// first frame gets skipped, so you do a few more).
//
// During the buffer loop pass, determine the reaction diffusion value using a combination of the
// value stored in the buffer's "X" channel, and a the blurred value - stored in the "Y" channel
// (You can see how that's done in the code below). Blur the value from the "X" channel (the old
// reaction diffusion value) and store it in "Y", then store the new (reaction diffusion) value
// in "X." Display either the "X" value  or "Y" buffer value in the "Image" tab, add some window
// dressing, then repeat the process. Simple... Slightly confusing when I try to explain it, but
// trust me, it's simple. :)
//
// Anyway, for a more sophisticated explanation, here are a couple of references below:
//
// Reaction-Diffusion by the Gray-Scott Model - http://www.karlsims.com/rd.html
// Reaction-Diffusion Tutorial - http://www.karlsims.com/rd.html

uniform vec2 resolution;
uniform float time;
uniform int frame;

uniform sampler2D iChannel0;


// Cheap vec3 to vec3 hash. Works well enough, but there are other ways.
vec3 hash33(in vec2 p){
    float n = sin(dot(p, vec2(41, 289)));
    return fract(vec3(2097152, 262144, 32768)*n);
}

// Serves no other purpose than to save having to write this out all the time. I could write a
// "define," but I'm pretty sure this'll be inlined.
vec4 tx(in vec2 p){ return texture2D(iChannel0, p); }

// Weighted blur function. Pretty standard.
float blur(in vec2 p){

    // Used to move to adjoining pixels. - uv + vec2(-1, 1)*px, uv + vec2(1, 0)*px, etc.
    vec3 e = vec3(1, 0, -1);
    vec2 px = 1./resolution.xy;

    // Weighted 3x3 blur, or a cheap and nasty Gaussian blur approximation.
    float res = 0.0;
    // Four corners. Those receive the least weight.
    res += tx(p + e.xx*px ).x + tx(p + e.xz*px ).x + tx(p + e.zx*px ).x + tx(p + e.zz*px ).x;
    // Four sides, which are given a little more weight.
    res += (tx(p + e.xy*px ).x + tx(p + e.yx*px ).x + tx(p + e.yz*px ).x + tx(p + e.zy*px ).x)*2.;
    // The center pixel, which we're giving the most weight to, as you'd expect.
    res += tx(p + e.yy*px ).x*4.;
    // Normalizing.
    return res/16.;

}

// The reaction diffusion loop.
//
void main(){


    vec2 uv = gl_FragCoord.xy/resolution.xy; // Screen coordinates. Range: [0, 1]
    // vec2 uv = (gl_FragCoord.xy * 2.0 - resolution.xy) / resolution.y;
    vec2 pw = 1./resolution.xy; // Relative pixel width. Used for neighboring pixels, etc.


    // The blurred pixel. This is the result that's used in the "Image" tab. It's also reused
    // in the next frame in the reaction diffusion process (see below).
    float avgReactDiff = blur(uv);


    // The noise value. Because the result is blurred, we can get away with plain old static noise.
    // However, smooth noise, and various kinds of noise textures will work, too.
    vec3 noise = hash33(uv + vec2(53, 43)*time)*.6 + .2;

    // Used to move to adjoining pixels. - uv + vec2(-1, 1)*px, uv + vec2(1, 0)*px, etc.
    vec3 e = vec3(1, 0, -1);

    // Gradient epsilon value. The "1.5" figure was trial and error, but was based on the 3x3 blur radius.
    vec2 pwr = pw*1.5;

    // Use the blurred pixels (stored in the Y-Channel) to obtain the gradient. I haven't put too much
    // thought into this, but the gradient of a pixel on a blurred pixel grid (average neighbors), would
    // be analogous to a Laplacian operator on a 2D discrete grid. Laplacians tend to be used to describe
    // chemical flow, so... Sounds good, anyway. :)
    //
    // Seriously, though, take a look at the formula for the reaction-diffusion process, and you'll see
    // that the following few lines are simply putting it into effect.

    // Gradient of the blurred pixels from the previous frame.
    vec2 lap = vec2(tx(uv + e.xy*pwr).y - tx(uv - e.xy*pwr).y, tx(uv + e.yx*pwr).y - tx(uv - e.yx*pwr).y);//

    // Add some diffusive expansion, scaled down to the order of a pixel width.
    uv = uv + lap*pw*3.0;

    // Stochastic decay. I.e. a differential equation, influenced by noise.
    // You need the decay, otherwise things would keep increasing, which in this case means a white screen.
    float newReactDiff = tx(uv).x + (noise.z - 0.5)*0.0025 - 0.002;

    // Reaction-diffusion.
    newReactDiff += dot(tx(uv + (noise.xy-0.5)*pw).xy, vec2(1, -1))*0.145;


    // Storing the reaction diffusion value in the X channel, and avgReactDiff (the blurred pixel value)
    // in the Y channel. However, for the first few frames, we add some noise. Normally, one frame would
    // be enough, but for some weird reason, it doesn't always get stored on the very first frame.
    if(frame > 9) gl_FragColor.xy = clamp(vec2(newReactDiff, avgReactDiff/.98), 0., 1.);
    else gl_FragColor = vec4(noise, 1.);

}

shader.frag:

// Reaction Diffusion - 2 Pass
// https://www.shadertoy.com/view/XsG3z1#

/*
    Reaction Diffusion - 2 Pass
    ---------------------------

    Simple 2 pass reaction-diffusion, based off of "Flexi's" reaction-diffusion examples.
    It takes about ten seconds to reach an equilibrium of sorts, and in the order of a
    minute longer for the colors to really settle in.

    I'm really thankful for the examples Flexi has been putting up lately. From what I
    understand, he's used to showing his work to a lot more people on much bigger screens,
    so his code's pretty reliable. Reaction-diffusion examples are temperamental. Change
    one figure by a minute fraction, and your image can disappear. That's why it was really
    nice to have a working example to refer to.

    Anyway, I've done things a little differently, but in essence, this is just a rehash
    of Flexi's "Expansive Reaction-Diffusion" example. I've stripped this one down to the
    basics, so hopefully, it'll be a little easier to take in than the multitab version.

    There are no outside textures, and everything is stored in the A-Buffer. I was
    originally going to simplify things even more and do a plain old, greyscale version,
    but figured I'd better at least try to pretty it up, so I added color and some very
    basic highlighting. I'll put up a more sophisticated version at a later date.

    By the way, for anyone who doesn't want to be weighed down with extras, I've provided
    a simpler "Image" tab version below.

    One more thing. Even though I consider it conceptually impossible, it wouldn't surprise
    me at all if someone, like Fabrice, produces a single pass, two tweet version. :)

    Based on:

    // Gorgeous, more sophisticated example:
    Expansive Reaction-Diffusion - Flexi
    https://www.shadertoy.com/view/4dcGW2

    // A different kind of diffusion example. Really cool.
    Gray-Scott diffusion - knighty
    https://www.shadertoy.com/view/MdVGRh


*/

uniform sampler2D iChannel0;
uniform vec2 resolution;
uniform float time;

/*
// Ultra simple version, minus the window dressing.
void main(){

    gl_FragColor = 1. - texture2D(iChannel0, gl_FragCoord.xy/resolution.xy).wyyw + (time * 0.);

}
//*/


//*
void main(){


    // The screen coordinates.
    vec2 uv = gl_FragCoord.xy/resolution.xy;
    // vec2 uv = (gl_FragCoord.xy * 2.0 - resolution.xy) / resolution.y;

    // Read in the blurred pixel value. There's no rule that says you can't read in the
    // value in the "X" channel, but blurred stuff is easier to bump, that's all.
    float c = 1. - texture2D(iChannel0, uv).y;
    // Reading in the same at a slightly offsetted position. The difference between
    // "c2" and "c" is used to provide the highlighting.
    float c2 = 1. - texture2D(iChannel0, uv + .5/resolution.xy).y;


    // Color the pixel by mixing two colors in a sinusoidal kind of pattern.
    //
    float pattern = -cos(uv.x*0.75*3.14159-0.9)*cos(uv.y*1.5*3.14159-0.75)*0.5 + 0.5;
    //
    // Blue and gold, for an abstract sky over a... wheat field look. Very artsy. :)
    vec3 col = vec3(c*1.5, pow(c, 2.25), pow(c, 6.));
    col = mix(col, col.zyx, clamp(pattern-.2, 0., 1.) );

    // Extra color variations.
    //vec3 col = mix(vec3(c*1.2, pow(c, 8.), pow(c, 2.)), vec3(c*1.3, pow(c, 2.), pow(c, 10.)), pattern );
    //vec3 col = mix(vec3(c*1.3, c*c, pow(c, 10.)), vec3(c*c*c, c*sqrt(c), c), pattern );

    // Adding the highlighting. Not as nice as bump mapping, but still pretty effective.
    col += vec3(.6, .85, 1.)*max(c2*c2 - c*c, 0.)*12.;

    // Apply a vignette and increase the brightness for that fake spotlight effect.
    col *= pow( 16.0*uv.x*uv.y*(1.0-uv.x)*(1.0-uv.y) , .125)*1.15;

    // Fade in for the first few seconds.
    col *= smoothstep(0., 1., time/2.);

    // Done.
    gl_FragColor = vec4(min(col, 1.), 1.);

}
//*/

and the sketch:

//Reaction Diffusion - 2 Pass
// https://www.shadertoy.com/view/XsG3z1

PShader bufA,shader;

void setup(){
  size(640,480,P2D);
  noStroke();

  bufA = loadShader("BufA.frag");
  bufA.set("resolution",(float)width,(float)height);
  bufA.set("time",0.0);

  shader = loadShader("shader.frag");
  shader.set("resolution",(float)width,(float)height);
}
void draw(){
  bufA.set("iChannel0",get());
  bufA.set("time",frameCount * .1);
  bufA.set("frame",frameCount);

  shader(bufA);
  background(0);
  rect(0,0,width,height);

  //2nd pass
  //resetShader();
  shader.set("iChannel0",get());
  shader.set("time",frameCount * .1);
  shader(shader);
  rect(0,0,width,height);
}

The shaders compile and run, but the output is different from what I see on Shadertoy: the Processing version stabilizes quite fast, and it doesn't look like the feedback works.
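A plausible cause: on Shadertoy, BufA samples its own previous output every frame, while the sketch feeds both shaders with `get()`, i.e. with whatever is currently on screen. The usual Processing translation is ping-pong rendering: two offscreen `PGraphics`, render BufA into one while sampling the other, swap them each frame, and only then run shader.frag to the screen. Stripped of all graphics, the swap pattern looks like this (plain Java, with a toy decay rule standing in for the reaction-diffusion step):

```java
public class PingPong {
    double[] a, b;  // "a" always holds the previous frame's result

    PingPong(int n, double init) {
        a = new double[n];
        b = new double[n];
        java.util.Arrays.fill(a, init);
    }

    // One feedback pass: read the old buffer, write the new one, swap.
    void step(double decay) {
        for (int i = 0; i < a.length; i++) b[i] = a[i] * decay;
        double[] t = a; a = b; b = t;  // ping-pong
    }

    public static void main(String[] args) {
        PingPong pp = new PingPong(4, 1.0);
        pp.step(0.5);
        pp.step(0.5);
        System.out.println(pp.a[0]);
    }
}
```

With PGraphics, the same shape is `bufA.set("iChannel0", pgOld); pgNew.beginDraw(); ... pgNew.endDraw();` followed by swapping the two references.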

Access PGraphics prevent SwapBuffers


Just a personal experiment, no rush with reply.

I would like to access the PGraphics and disable buffer swapping in Processing. The goal is to write my own draw function; coding in JOGL, I never know the proper way to manage my JOGL call stack.

There is a good chance I can manage it on my own. The simpler version of the question is:
What is the proper way to access the default PGraphics? I know there are some Java people out here who know the answer.
Don't make me search the forum; the answer is out there somewhere, I know.

In case you have some JOGL knowledge:

Or is it so that the FPSAnimator is bound to the draw function?

Thanks for the reply.

*.pde to play with

// should be continuously red
import  com.jogamp.opengl.GL4;
import com.jogamp.opengl.util.GLBuffers;
void settings() {
  size(640, 360, P3D);
  PJOGL.profile = 4;
}
void setup() {
  GL4  gl4 = ((PJOGL)beginPGL()).gl.getGL4();
  gl4.glClearBufferfv(GL4.GL_COLOR, 0, GLBuffers.newDirectFloatBuffer(4).put(0, 1f).put(1, 0f).put(2, 0f).put(3, 1f));
  endPGL();
  frameRate(1);
}

Trying to Make an Eye of Providence Application

$
0
0

Hi guys! I'm currently working on a term project that requires I make a 3D application with the following requirements:

(1) Have at least 5 global variables;

(2) The program flow should have at least two loops and three conditional controls

(3) Have at least two customized functions (other than setup() and draw() – you need to write your OWN function);

(4) Have at least one set of Geometric Transformation (Translate, Scale, and Rotate);

(5) Have at least three user interactions (e.g. mouse tracking, right click, buttons);

(6) Import at least one third-party library;

(7) The 3D scene must contain at least one non-default, non-primitive 3D object (cannot be a cube, sphere, cylinder, or cone) created using PShape with your own defined vertices. You can create your object by using non-default primitive shapes and geometric transformations, or create your own meshes within the 3D coordinate system by defining every vertex. Imported objects from other applications are not allowed.

(8) At least one lighting source;

(9) At least one Shader (shading script – vertex shader and fragment shader);

(10) At least one texture applied to your customized object;

(11)For (7) to (10), there should be buttons (or keys) via which you can show different views of the object, for example, one key to rotate camera, one key to show object model mesh, one key to show shading effect, etc.

I'm trying to make an Eye of Providence where the eye moves inside the pyramid according to where the mouse points, changes color when the spacebar is pressed, and resets with a mouse click. So far this is what I've got; it's not much, but I'm pretty stuck.

import peasy.*;
import peasy.org.apache.commons.math.*;
import peasy.org.apache.commons.math.geometry.*;


PeasyCam cam;
PImage img;
float myAngle= 0.0;
void setup(){
  size(600,600, P3D);
  cam = new PeasyCam(this, 500);
  img = loadImage("Pyramid_Texture.jpg");
}
void draw() {
  background(30, 0, 80);
  translate(width/2, height/2-150);
  rotateX(PI/2);
  rotateZ(myAngle);
  stroke(255);
  drawPyramid();
  myAngle = myAngle + 0.01;

}

void drawPyramid() {
  // this creates a pyramid shape by defining
  // vertex points in 3D space (x, y, z)
  beginShape();
  texture(img);
    // triangle 1
    fill(#FFCBFE);
    vertex(-100, -100, -100);
    vertex( 100, -100, -100);
    vertex(   0,    0,  100);
  endShape(CLOSE);
  beginShape();
  texture(img);
    // triangle 2
    //fill(#015F21);
    vertex( 100, -100, -100);
    vertex( 100,  100, -100);
    vertex(   0,    0,  100);
  endShape(CLOSE);
  beginShape();
  texture(img);

    // triangle 3
   // fill(#8DDBDE);
    vertex( 100, 100, -100);
    vertex(-100, 100, -100);
    vertex(   0,   0,  100);
 endShape(CLOSE);
 beginShape();
 texture(img);
    // triangle 4
    //fill(#5D2D00);
    vertex(-100,  100, -100);
    vertex(-100, -100, -100);
    vertex(   0,    0,  100);
endShape(CLOSE);
beginShape();
texture(img);
    // pyramid base
    //fill(#9CA24D);
    vertex(-100, -100, -100);
    vertex( 100, -100, -100);
    vertex( 100, 100, -100);
    vertex(-100,  100, -100);
endShape(CLOSE);
  }

I know it's still very barebones but I'm trying to figure out how to get the pyramid to stay still while allowing a sphere to move inside it, and how to apply texture, shaders and lighting to the scene. Any help is greatly appreciated!
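
One way to keep the pyramid still while a sphere moves inside it is to leave the pyramid untransformed (PeasyCam already drives the camera, so the rotateZ spin isn't needed) and wrap only the sphere's transform in pushMatrix()/popMatrix(). A minimal sketch of that idea as a replacement draw(), reusing drawPyramid() from the sketch above; the ±50 movement range is an illustrative guess, not a calibrated value:

```
void draw() {
  background(30, 0, 80);
  lights();            // basic default lighting for the whole scene

  stroke(255);
  drawPyramid();       // drawn untransformed, so it stays still

  // map the mouse to a small range so the "eye" stays inside the pyramid
  float ex = map(mouseX, 0, width, -50, 50);
  float ey = map(mouseY, 0, height, -50, 50);

  pushMatrix();        // isolate the sphere's transform from the pyramid
  translate(ex, ey, 0);
  noStroke();
  fill(255);
  sphere(30);
  popMatrix();
}
```

The spacebar color change and mouse-click reset can then be handled in keyPressed()/mousePressed() by toggling a color variable used for the sphere's fill.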

How to align depth and RGB textures from the Kinect using GLSL


I'm working on a project that needs to cut out the background and show only the figure in RGB colors. For this I'm using a Kinect, with a video as the background. This has to work on the Raspberry Pi. The only Kinect library that works on the Raspberry is Open Kinect, and Open Kinect has neither the mask function nor the skeleton function.

I've managed to create a mask that works pretty well on the Raspberry using shaders, though the problem I'm having is that the RGB and depth textures are not aligned. I've heard this is something that happens with the Kinect 1414 and not with the Kinect 1417, but I've only got the 1414 model, so I have to make it work here.

This is a reference pic of how the mask is working:

[image: mascara]

The thing in the background is the video, look how wrong the mask is.

I'm testing it on Windows, so I'm using the kinect4win library. I know kinect4win has the kinect.getMask() function that works perfectly, but I need to run on the Raspberry, so that function is useless and I have to make my own. Here is my code using kinect4win, which I'm then going to translate to Open Kinect:

PROCESSING :

import processing.video.*;
import kinect4WinSDK.Kinect;

Movie movie;

Kinect kinect;


PGraphics canvas;
PShader shader;

// brightness threshold
float minBrightness = 0.8;


//Values tested on  kinect4windows
float scale = 0.9099997;
float xoffset = 0.08226191;
float yoffset = 0.08226191;

//values tested on openKinect for raspberry pi
/*float scale = 1.0799996;
float xoffset = -0.007174231;
float yoffset = -0.07838542;*/

void setup()
{
  fullScreen(P2D);
  //size(640, 480, P2D);
  frameRate(60);

  kinect = new Kinect(this);
  // load shader and set threshold
  shader = loadShader("mask.glsl");

  // Load and play the video in a loop
 /* movie = new Movie(this, "transit.mov");
  movie.loop();*/

  shader.set("xoffset", xoffset);
  shader.set("yoffset", yoffset);
  shader.set("scale", scale);
}

void draw()
{
  // reset screen
  background(0, 0);
  // visualise result
  shader.set("umbral", minBrightness);

  // 0.05 could work here
  float xoffset = map(mouseX, 0, width, -0.1, 0.1);
  float yoffset = map(mouseY, 0, height, -0.1, 0.1);

  // push the mouse-tuned offsets to the shader so they actually take effect
  shader.set("xoffset", xoffset);
  shader.set("yoffset", yoffset);

  println("X Offset : ", xoffset);
  println("Y Offset : ", yoffset);


  fill(255, 0, 0);
  ellipse(mouseX, mouseY, 20, 20);

  /*fill(0);
  image(movie, 0, 0, width, height);
  fill(255);
  */

  shader.set("texture", kinect.GetDepth());
  shader.set("texture2", kinect.GetImage());

  shader(shader);
  image(kinect.GetDepth(), 0, 0, width, height);
  resetShader();

  fill(0, 255, 0);

  float sep = 20;
  text("FPS: " + frameRate, 10, height-sep);
  text("xoffset :" + xoffset, 10, height-sep*2);
  text("yoffset :" + yoffset, 10, height-sep*3);
  text("scale :" + scale, 10, height-sep*4);
}

void movieEvent(Movie m) {
  m.read();
}

void keyPressed() {

  if (key == '+') {

    minBrightness +=0.1;
  }
  if (key == '-') {

    minBrightness -=0.1;
  }

  if (key == 'a') {
    scale += 0.02;
  }
  if (key == 's') {
    scale -= 0.02;
  }
  // push the updated scale to the shader, otherwise the keys have no effect
  shader.set("scale", scale);
}

This is my GLSL code:

/*#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
*/
uniform sampler2D texture;  // the depth texture

uniform sampler2D texture2; // the RGB texture

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform float umbral;
uniform float xoffset;
uniform float yoffset;
uniform float scale;

void main() {
    vec3 luminanceVector = vec3(0.2125, 0.7154, 0.0721);
    vec4 c = texture2D(texture, vertTexCoord.st) * vertColor;


    //CORRECT DEPTH RGB ALIGNMENT :

    vec4 c2 =  texture2D(texture2, (vertTexCoord.st+vec2(xoffset,yoffset))*scale) * vertColor;

    float luminance = dot(luminanceVector, c.xyz);
    luminance = max(0.0, luminance - umbral);


    c.r = c2.r;
    c.g = c2.g;
    c.b = c2.b;
    c.a = sign(luminance);

    gl_FragColor = vec4(c.rgb,c.a);
}

I'm posting this in the shaders category because all I need to do is translate the depth mask so it aligns with the RGB, but I'm not very good with shaders. So my question is: how do I translate the depth mask so it aligns with the RGB texture?

UPDATE: I've managed to set an xoffset and yoffset for aligning the textures, yet it doesn't seem to be only a problem of alignment but also a problem of scale; it's as if the depth image is a bit larger. Maybe some OpenCV algorithm could improve it?

UPDATE2: I've managed to set a SCALE. Better, but still inaccurate; I don't know how to solve this now.
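
In the shader the correction is `(st + offset) * scale` applied to the texture coordinate. It can help to prototype that mapping on the CPU to sanity-check candidate values before touching GLSL. A plain-Java sketch of the same transform (the sample numbers are illustrative, not calibrated for any real Kinect):

```java
// CPU-side version of the shader's texture-coordinate correction:
// aligned = (st + offset) * scale, clamped to the [0,1] texture range
public class DepthToRgbAlign {
    static float clamp01(float v) {
        return Math.max(0f, Math.min(1f, v));
    }

    // maps a depth-texture coordinate (s,t) to the matching RGB-texture coordinate
    static float[] align(float s, float t,
                         float xoffset, float yoffset, float scale) {
        return new float[] {
            clamp01((s + xoffset) * scale),
            clamp01((t + yoffset) * scale)
        };
    }

    public static void main(String[] args) {
        // illustrative values close to the ones tuned in the sketch above
        float[] uv = align(0.5f, 0.5f, 0.08f, 0.08f, 0.91f);
        System.out.println(uv[0] + ", " + uv[1]);
    }
}
```

Walking the four corners (0,0), (1,0), (0,1), (1,1) through align() shows quickly whether a given offset/scale pair pushes the mask off the RGB frame.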

UPDATE3: SOLVED IT! The scale was not actually being applied; now the code is fully functional.

Abnormal Pixels at the edges of Jump Flood Algorithm voronoi graph.

Shaders Introduction

$
0
0

Hey! I'm new to the shaders world, and I'm asking if these examples are made with shaders. What do you think? In particular, in the first video, the particle-blur-dissolve effect; in the other video, the light effect that switches on and off. What way do you recommend to start with shaders?

THANK YOU!
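
Effects like those usually combine shaders with particle systems, so they're more than a single fragment shader; but a good way to start in Processing is a tiny filter shader. This is a generic first example, not a reconstruction of the videos; the uniform names resolution and time are my own choices, fed from the sketch with shader.set():

```glsl
// save as "data/simple.glsl"; minimal Processing fragment shader
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 resolution;  // sketch sets this to (width, height)
uniform float time;       // sketch sets this to millis()/1000.0f

void main() {
  vec2 uv = gl_FragCoord.xy / resolution;  // normalized pixel position
  gl_FragColor = vec4(uv.x, uv.y, 0.5 + 0.5 * sin(time), 1.0);
}
```

In the sketch: `PShader sh = loadShader("simple.glsl");` then each frame `sh.set("resolution", float(width), float(height)); sh.set("time", millis() / 1000.0f); filter(sh);`.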

translate tutorial.

$
0
0

Hey! Can anybody help me to understand shaders better? I'm reading all the tutorials I can find, including the PShader reference on the Processing page. For now, I'm trying to pass vertex data into a GLSL vertex shader (I'm editing shaders in Atom). I found this Max/MSP tutorial and I can't figure out how to translate it to Processing.

[image: Max/MSP tutorial screenshot]

Can anybody help me? Thanks a lot. PS: if anybody knows any book or other information about shaders and how to implement them in Processing, please share! Thanks.
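
Unlike the Max/MSP patch, Processing doesn't require binding vertex attributes by hand: with the P2D/P3D renderer, every vertex() call already reaches your vertex shader as the standard attributes (position, color, texture coordinate), and Processing supplies the transform matrix as a uniform. Anything extra goes in as a uniform via PShader.set(). A hedged sketch of the Processing side; "myfrag.glsl" and "myvert.glsl" are placeholder file names:

```
PShader sh;

void setup() {
  size(400, 400, P3D);
  // loadShader(fragFilename, vertFilename): fragment first, then vertex
  sh = loadShader("myfrag.glsl", "myvert.glsl");
}

void draw() {
  background(0);
  sh.set("time", millis() / 1000.0f);  // custom uniform, readable in both shaders
  shader(sh);
  // each vertex() below arrives in the vertex shader as its input attributes
  beginShape();
  vertex(100, 100);
  vertex(300, 100);
  vertex(200, 300);
  endShape();
  resetShader();
}
```

The vertex shader then declares the matching inputs Processing provides, e.g. `uniform mat4 transform; attribute vec4 position;` and writes `gl_Position = transform * position;`.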
