Article Source

Original source: SGI OpenGL tutorial
Translated by: Xin Lan (Pan Liliang). Email: Xheartblue@etang.com

Translator's preface:
  There are two classic ways to implement shadows: Shadow Volumes and Shadow Mapping. How can a light map be used to cast a projected shadow? That is where projective texturing comes in. Literally, it means projecting a texture onto the scene like a slide: imagine a film projector showing a movie. Along the direction of the lens, the film is projected onto the wall; projective texturing works the same way, with the texture we want to apply playing the role of the film inside the projector.

  The following is the article I found in SGI's tutorials, offered here for everyone.

How to Project a Texture
http://www.sgi.com/software/opengl/advanced98/notes/node49.html 


  Projecting a texture image into your synthetic environment requires many of the same steps that are used to project the rendered scene onto the display. The key to projecting a texture is the contents of the texture transform matrix. The matrix contains the concatenation of three transformations:

    1. A modelview transform to orient the projection in the scene.

    2. A projective transform (perspective or orthographic).

    3. A scale and bias to map the near clipping plane to texture coordinates.


  The modelview and projection parts of the texture transform can be computed in the same way, and with the same tools, as the modelview and projection transforms of the ordinary graphics pipeline. For example, you can use gluLookAt() to orient the projection, and glFrustum() or gluPerspective() to define a perspective transformation.

  The modelview transform is used in the same way as it is in the OpenGL viewing pipeline: it moves the viewer to the origin and centers the projection along the negative z axis. In this case the viewer can be thought of as a light source, and the near clipping plane of the projection as the location of the texture image, which can be thought of as printed on a transparent film. Alternatively, you can picture a viewer at the view location, looking through the texture on the near plane at the surfaces to be textured.

  The projection operation converts eye space into Normalized Device Coordinate (NDC) space, in which the x, y, and z coordinates each range from -1 to 1. When used in the texture matrix, the coordinates are s, t, and r instead. The projected texture can be visualized as lying on the near plane of the oriented projection defined by the modelview and projection parts of the transform.

  The final part of the transform scales and biases the texture map, which is defined in texture coordinates ranging from 0 to 1, so that the entire texture image (or the desired portion of the image) covers the near plane defined by the projection. Since the near plane is now defined in NDC coordinates, mapping the NDC near plane to the texture image requires scaling by 1/2 and then biasing by 1/2 in both s and t (that is, [-1, 1] * 1/2 + 1/2 = [0, 1]). The texture image is then centered and covers the entire near plane. The texture can also be rotated if the orientation of the projected image needs to be changed.

The transforms are ordered the same way as in the graphics pipeline: the modelview transform is applied to the coordinates first, then the projection, then the scale and bias that position the near plane onto the texture image. Because OpenGL post-multiplies the current matrix, the calls are issued in the reverse order, as shown in the sketch after this list:

  1. glMatrixMode(GL_TEXTURE)

  2. glLoadIdentity() (start over)

  3. glTranslatef(0.5f, 0.5f, 0.0f)

  4. glScalef(0.5f, 0.5f, 1.0f) (texture covers entire NDC near plane)

  5. Set the perspective transform (e.g., glFrustum()).

  6. Set the modelview transform (e.g., gluLookAt()).
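
  A minimal sketch of this setup in C, assuming a hypothetical projector position, field of view, and near/far distances (none of these values come from the original article):

  /* Sketch: build the texture matrix for projecting a texture from an
     assumed "projector" placed at (5, 5, 5) and aimed at the origin. */
  glMatrixMode(GL_TEXTURE);
  glLoadIdentity();
  glTranslatef(0.5f, 0.5f, 0.0f);          /* bias NDC [-1,1] into [0,1]    */
  glScalef(0.5f, 0.5f, 1.0f);              /* scale NDC [-1,1] into [0,1]   */
  gluPerspective(30.0, 1.0, 1.0, 100.0);   /* assumed projector frustum     */
  gluLookAt(5.0, 5.0, 5.0,                 /* assumed projector position    */
            0.0, 0.0, 0.0,                 /* assumed point it looks at     */
            0.0, 1.0, 0.0);                /* up vector                     */
  glMatrixMode(GL_MODELVIEW);              /* restore the usual matrix mode */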


What about the texture coordinates for the primitives that the texture will be projected onto? Since the projection and modelview parts of the matrix have been defined in terms of eye space (where the entire scene is assembled), the most straightforward method is to create a one-to-one mapping between eye space and texture space. This can be done by enabling eye-linear texture coordinate generation and setting the eye planes to a one-to-one mapping (translator's note: see OpenGL's texture coordinate generation; Direct3D offers an equivalent mechanism):

  GLfloat Splane[] = {1.f, 0.f, 0.f, 0.f};
  GLfloat Tplane[] = {0.f, 1.f, 0.f, 0.f};
  GLfloat Rplane[] = {0.f, 0.f, 1.f, 0.f};
  GLfloat Qplane[] = {0.f, 0.f, 0.f, 1.f};
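
  As a sketch of how these plane equations might be fed to the fixed-function pipeline (standard OpenGL calls; the surrounding texture setup is assumed):

  /* Sketch: eye-linear texgen with the identity planes above.  The plane
     coefficients are multiplied by the inverse of the modelview matrix in
     effect when they are specified, so loading them while the modelview
     matrix is the identity makes the generated (s,t,r,q) equal the
     vertex's eye-space coordinates. */
  glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
  glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
  glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
  glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
  glTexGenfv(GL_S, GL_EYE_PLANE, Splane);
  glTexGenfv(GL_T, GL_EYE_PLANE, Tplane);
  glTexGenfv(GL_R, GL_EYE_PLANE, Rplane);
  glTexGenfv(GL_Q, GL_EYE_PLANE, Qplane);
  glEnable(GL_TEXTURE_GEN_S);
  glEnable(GL_TEXTURE_GEN_T);
  glEnable(GL_TEXTURE_GEN_R);
  glEnable(GL_TEXTURE_GEN_Q);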


You could also use an object-space mapping, but then you would have to take the current modelview transform into account when setting it up.


  So when you have done all this, what happens? As each primitive is rendered, texture coordinates matching the x, y, and z values that have been transformed by the modelview matrix are generated, and are then transformed by the texture matrix. That matrix applies a modelview and projection transform, which orients and projects the primitive's texture coordinate values into NDC space (-1 to 1 in each dimension); these values are then scaled and biased into texture coordinates. Finally, normal filtering and texture environment operations are performed using the texture image.

  If the transformation and texturing are applied to every rendered polygon, how do you limit the projected texture to a single area? There are a number of ways to do this. The simplest is to render only the polygons you intend to project the texture onto while projective texturing is active and the projection is loaded in the texture matrix, but this method is crude. Another way is to use the stencil buffer in a multipass algorithm to control which parts of the scene are updated by the projected texture: the scene is first rendered without the projected texture, the stencil buffer is set to mask off an area, and the scene is then re-rendered with the projected texture, using the stencil buffer to mask off all but the desired area. This lets you create an arbitrary outline for the projected image, or project a texture onto a surface that already has its own texture (translator's note: this is multipass texturing, and it does not require ARB_multitexture support).
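
  A minimal sketch of such a stencil-masked approach, where drawScene() and drawMaskArea() are hypothetical helpers standing in for the application's own drawing code:

  /* Pass 1: draw the scene normally, then mark the region that should
     receive the projected texture with stencil value 1 (color writes
     disabled so only the stencil buffer is touched). */
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
  drawScene();
  glEnable(GL_STENCIL_TEST);
  glStencilFunc(GL_ALWAYS, 1, 0xFF);
  glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
  glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
  drawMaskArea();                 /* hypothetical: geometry covering the desired area */
  glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

  /* Pass 2: re-render with the projected texture enabled, updating only
     pixels whose stencil value is 1 (depth-test details omitted). */
  glStencilFunc(GL_EQUAL, 1, 0xFF);
  glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
  drawScene();                    /* this time with projective texturing enabled */
  glDisable(GL_STENCIL_TEST);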

  There is a very simple method that works when you want to project a non-repeating texture onto an untextured surface: set the texture environment to GL_MODULATE, set the texture wrap mode to GL_CLAMP, and set the texture border color to white. When the texture is projected, surfaces outside the texture itself default to the texture border color and are modulated with white. This leaves those areas unchanged, since each of their color components is scaled by one.
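
  As a sketch, the corresponding state for the projected texture (binding and loading of the texture object are assumed):

  /* Sketch: clamp the projected texture and give it a white border so that
     everything outside the projected image is modulated by 1.0, i.e. left
     unchanged. */
  GLfloat white[] = {1.f, 1.f, 1.f, 1.f};
  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
  glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, white);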

  Filtering considerations are the same as for normal texturing: the size of the projected texture relative to screen pixels determines minification or magnification. If the projected image will be relatively small, mipmapping may be required to get good-quality results. Using good filtering is especially important if the projected texture moves from frame to frame.
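
  A sketch of a mipmapped filter setup, where width, height, and pixels stand in for the actual image data:

  /* Sketch: build a mipmap chain for the projected texture and select a
     trilinear minification filter. */
  gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);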

  Please note that, like the viewing projection, the texture projection is not really optical. Unless special steps are taken, the texture will affect all surfaces within the projection, both in front of and behind the projection point (translator's note: the image would appear behind the "projector" as well, which of course does not match real optics). Since there is no implicit view-volume clipping (as there is in the OpenGL viewing pipeline), the application needs to be modeled carefully to avoid undesired texture projections, or user-defined clipping planes can be used to control where the projected texture appears.
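
  For instance, a user-defined clip plane could be enabled roughly as below; the plane coefficients are an arbitrary example standing in for the projector's actual near plane, and the plane is assumed to be specified while the modelview matrix is the identity:

  /* Sketch: keep only geometry with eye-space z <= 5 so nothing "behind"
     the assumed projector receives the texture. */
  GLdouble projectorPlane[] = {0.0, 0.0, -1.0, 5.0};
  glClipPlane(GL_CLIP_PLANE0, projectorPlane);
  glEnable(GL_CLIP_PLANE0);
  /* ... render the surfaces that receive the projected texture ... */
  glDisable(GL_CLIP_PLANE0);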

