Send data to a postprocessing shader

Hello! I have this shader, and I use it in a post-processing buffer that takes as its texture input a scene of different meshes:
How can I set, for example, the uniform `exposure` to a new value for each mesh that I create? In other words, I need to pass variable data to this buffer. How can I do it?

    #version 150

    in vec2 varyingtexcoord;
    uniform sampler2DRect tex0;
    uniform vec3 ligthPos;
    uniform float exposure = 0.19;
    uniform float decay = 0.9;
    uniform float density = 2.0;
    uniform float weight = 1.0;
    uniform int samples = 25;

    out vec4 fragColor;
    const int MAX_SAMPLES = 100;

    void main() {
        vec2 texCoord = varyingtexcoord;
        vec2 deltaTextCoord = texCoord - ligthPos.xy;
        deltaTextCoord *= 1.0 / float(samples) * density;
        vec4 color = texture(tex0, texCoord);
        float illuminationDecay = 1.0;
        for (int i = 0; i < MAX_SAMPLES; i++) {
            if (i == samples) break; // the loop bound must be constant, so exit early here
            texCoord -= deltaTextCoord;
            vec4 sampleColor = texture(tex0, texCoord); // "sample" is reserved in later GLSL versions
            sampleColor *= illuminationDecay * weight;
            color += sampleColor;
            illuminationDecay *= decay;
        }
        fragColor = color * exposure;
    }

I’m not sure I understand. `exposure` is a uniform of a post-processing shader that operates on a full-screen quad which displays your rendered scene, right? If so, it’s no longer possible to distinguish between individual meshes. The post-processing effect is applied to the scene as a whole.
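One way around this is to apply per-mesh values while the meshes are still being drawn into the scene FBO, before the post-processing pass runs. A minimal sketch, assuming a hypothetical `meshExposure` uniform that you would set from the app once per mesh (e.g. `shader.setUniform1f("meshExposure", e)` before each draw call):

    #version 150
    // Hypothetical per-mesh fragment shader: runs while each mesh is drawn
    // into the scene FBO, so the fullscreen post-processing pass that follows
    // never needs to distinguish between meshes.
    in vec4 interpolatedColor;   // whatever your mesh shader already outputs
    uniform float meshExposure;  // per-mesh value, set before each draw call
    out vec4 fragColor;

    void main() {
        fragColor = interpolatedColor * meshExposure;
    }

This way the "variable data" lives in the geometry pass, and the post-processing shader only ever sees the already-adjusted pixels.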

Ok, yes, I understand that. But since my shader (it's a radial blur) needs a `ligthPos` uniform to calculate the center of the blur, my question is how to handle a scenario where I have more than one object. In other words, how can I use this shader with many objects? I was reading this, but I don't know whether it will be useful:

27.2 Extracting Object Positions from the Depth Buffer

When an object is rendered and its depth values are written to the depth buffer, the values stored in the depth buffer are the interpolated z coordinates of the triangle divided by the interpolated w coordinates of the triangle after the three vertices of the triangles are transformed by the world-view-projection matrices. Using the depth buffer as a texture, we can extract the world-space positions of the objects that were rendered to the depth buffer by transforming the viewport position at that pixel by the inverse of the current view-projection matrix and then multiplying the result by the w component. We define the viewport position as the position of the pixel in viewport space—that is, the x and y components are in the range of -1 to 1 with the origin (0, 0) at the center of the screen; the depth stored at the depth buffer for that pixel becomes the z component, and the w component is set to 1.

We can show how this is achieved by defining the viewport-space position at a given pixel as H. Let M be the world-view-projection matrix and W be the world-space position at that pixel.
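The extraction described above can be sketched in GLSL. This assumes the depth buffer is bound as a texture called `depthTex` and the inverse of the current view-projection matrix is uploaded as `invViewProj`; both names are illustrative:

    #version 150
    // Reconstruct the world-space position of a pixel from the depth buffer,
    // following the H / M / W definitions quoted above.
    uniform sampler2D depthTex;  // depth buffer bound as a texture
    uniform mat4 invViewProj;    // inverse of the view-projection matrix

    vec3 worldPosFromDepth(vec2 uv) {  // uv in 0..1 across the screen
        float depth = texture(depthTex, uv).r;
        // H: viewport-space position, x/y remapped to -1..1, z = stored depth.
        // (With OpenGL's default depth range you may also need
        //  depth * 2.0 - 1.0 for the z component.)
        vec4 H = vec4(uv * 2.0 - 1.0, depth, 1.0);
        // D = inverse(M) * H, then divide by w to undo the perspective divide
        vec4 D = invViewProj * H;
        return D.xyz / D.w;  // W, the world-space position
    }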

What I need is the position, in screen coordinates, of the meshes that I draw in another buffer; I get it using project and send it to the post-processing shader as a uniform. But I don't know how I can pass multiple positions for multiple objects. Any suggestion will be appreciated.
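For example, on the shader side I imagine something like a fixed-size array of positions, filled from the app (e.g. with `shader.setUniform2fv(...)`), though I'm not sure this is the right approach. The names `MAX_LIGHTS`, `lightPositions`, and `numLights` are just placeholders:

    #version 150
    // Sketch: one screen-space blur center per object, uploaded as an array.
    const int MAX_LIGHTS = 8;
    uniform vec2 lightPositions[MAX_LIGHTS]; // one projected position per object
    uniform int numLights;                   // how many entries are valid

    in vec2 varyingtexcoord;
    uniform sampler2DRect tex0;
    out vec4 fragColor;

    void main() {
        vec4 color = texture(tex0, varyingtexcoord);
        for (int i = 0; i < MAX_LIGHTS; i++) {
            if (i >= numLights) break;
            vec2 delta = varyingtexcoord - lightPositions[i];
            // ... run the radial-blur accumulation once per light
            //     (as in the shader above) and add it into color
        }
        fragColor = color;
    }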