1. Cameron actually adopted the latest GPU rendering technology and rendered on graphics cards; he also pointed out that, compared with traditional CPU rendering, this was many times faster. Pixar has similar technology in its own renderer, and Avatar's rendering was in fact built on the Pixar renderer; Pixar and Cameron were partners at the time. Of course, not all CG can be rendered on the GPU, for example particle dynamics and more complex soft-body dynamics. Someone above also brought up ray tracing versus radiosity; the two are not really comparable. Ray tracing suits objects that reflect strongly but scatter relatively little light, while radiosity is used, for example, to render walls in architectural visualisation: walls scatter a great deal of light, so the computation is very expensive. On my computer's E7200 CPU, a radiosity render is much slower than ray tracing, but the result looks somewhat more realistic.
2. Example renders
3. Links to articles on the radiosity method
http://dev.gameres.com/Program/Visual/3D/Radiosity_Translation.htm
http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm
4. Full text
Radiosity
Hugo Elias
Translated by He Yong
Note: the original article was written by Hugo Elias and translated by He Yong. The translation is provided for study and exchange only.
It may not be reposted without the translator's permission, and it may not be used for any commercial purpose.
Introduction: this is a classic tutorial on the radiosity algorithm. It explains in detail how to compute light maps for a static scene using radiosity, which is the technique most games use. Many papers and books discuss how to apply light maps in real-time rendering to produce convincing lighting, but they focus mainly on how to organise the light maps; very little material explains in detail how the light maps themselves are computed, that is, the global illumination algorithm. I found this article on the web, learned a great deal from it, and decided to translate it so that more people can learn about the subject. If anything is unclear you can contact the author, or discuss it with me (my website: http://program.stedu.net). Please point out, and forgive, any errors or omissions in the translation, as this is my first translated article. If your English is good, I recommend reading the original directly: click here.
Lighting and shadow-casting algorithms can be roughly divided into two categories: Direct Illumination and Global Illumination. Many people will be familiar with the former, and with the problems associated with it. This article will briefly discuss the two approaches, then give an in-depth study of one Global Illumination method: Radiosity.
Direct Illumination
Direct Illumination covers the principal lighting methods used by old-school rendering engines such as 3D Studio and POV. A scene consists of two kinds of entity: objects and lights. Lights cast light onto objects, unless another object is in the way, in which case a shadow is left behind.
There are all sorts of techniques under this heading: shadow volumes, Z-buffer methods, ray tracing and so on. But because they all follow the same general principle, they suffer from similar problems, and all require some kind of fudge to overcome them.
Direct Illumination: Problems and Advantages
The most important point to consider is that, because these methods aim to produce images that are sharper than reality, they can really only handle scenes containing point light sources and objects that are perfect specular or diffuse reflectors. Now, unless you are some kind of rich idiot, your house is probably not full of perfectly glossy spheres and point-like light sources. In fact, unless you live in a universe with completely different physics, no super-sharp shadow will ever appear in your room.
It is quite common for people to claim that ray tracers and other renderers produce photo-realistic results. But imagine someone showed you a typical ray-traced image (rendered in much the same way as classic OpenGL rasterisation and lighting) and claimed it was a photograph; you would tell them in return that they were blind or lying.
It should also be noted that, in the real world, we can still see objects that are not directly lit; shadows are never completely black. Direct illumination renderers try to handle this by adding an ambient light term, so that every object receives at least some minimum, uniform amount of light.
Global Illumination
Global illumination methods try to overcome some of the problems associated with ray tracing. Where a ray tracer tends to simulate light reflecting only once off each diffuse surface, a global illumination renderer simulates the many reflections of light around a scene. In a ray tracer, every object in the scene must be lit by some light source to be visible; in a globally illuminated scene, an object may be lit simply by its surroundings. Why this matters will become clear shortly.
Global Illumination: Problems and Advantages
Images produced by global illumination methods can look genuinely convincing; they are in a league of their own, leaving the old-school renderers to churn out sad cartoons. But, and it is a big "but": they are much slower. Just as you may once have left your ray tracer running all day and come back thrilled by the image it produced, you will be doing the same here.
|              | Advantages                                  | Disadvantages |
| Radiosity    | Very realistic lighting of diffuse surfaces | Slow          |
| Monte Carlo  | Very, very good results                     | Slow          |
Lighting a Simple Scene with Direct Illumination
I modelled this simple scene in 3D Studio. I wanted the room to look as if it were lit by the sun shining in through the window, so I set up a spotlight to shine in. When I rendered it, the entire room was almost pitch black, except for the small patch the light could reach. Turning on ambient light merely gave the scene a uniform grey, with the lit part of the floor a uniform red. Adding a point light in the middle of the scene revealed more detail, but the scene still had none of the bright patches of sunlight you would expect in a sunlit room. Finally, I set the background colour to white to suggest a bright sky.
Lighting the Same Scene with Global Illumination
I rendered this scene with my own radiosity renderer. To provide the light, I rendered a sky image with Terragen and placed it outside the window; no other light source was used. With no further effort on my part, the room looks realistically lit. Note a few interesting points:
· The entire room is lit and visible, even surfaces facing away from the sun.
· Soft shadows.
· The brightness grades subtly across the walls.
· The originally grey walls are no longer plain grey; they have taken on a little warmth, and the ceiling could even be said to have a pale pink tint.
The Workings of a Radiosity Renderer
Clear your mind of everything you know about normal rendering methods; your previous experience may simply distract you.
I would now like to ask an expert on shadows, who will explain everything they know about the subject. My expert is a tiny patch of paint on the wall in front of me.
Hugo: "为什么你在阴影当中,而你身边的那一片跟你很相像的油漆却在光亮之中?"
油漆: "你什么意思?"
Hugo: "你是怎么知道你什么时候应该在阴影之中,什么时候不在? 你知道哪些阴影投射算法?你只是一些油漆而已啊。"
油漆: "听着,伙计。我不知道你在说什么。我的任务很简单:任何击中我的光线,我把它分散开去。"
Hugo: "任何光线?"
油漆: "是的。任何光线。我没有任何偏好。"
So there you have it. That is the basic premise of radiosity: any light that hits a surface is reflected back into the scene. Any light. Not just light that comes directly from light sources. Any light. That is how paint thinks in the real world, and that is how the radiosity renderer works.
In a later article I will explain how to make your own talking paint.
So, the basic principle behind a radiosity renderer is to remove the distinction between objects and light sources. Now you can consider everything to be a potential light source: anything that is visible is either emitting or reflecting light; either way, it is a source of light. Everything you can see around you is a light source. And so, when we consider how much light reaches any part of the scene, we must take care to add up the light from every visible light source.
Basic Premises:
1: There is no difference between light sources and objects.
2: A surface in the scene is lit by all parts of the scene that are visible to it.
Now that you have these important ideas in mind, I will take you through the process of computing the radiosity lighting for a scene.
A Simple Scene
We begin with a simple scene: a room with three windows. There are a couple of pillars and some alcoves to provide interesting shadows. It will be lit by the scenery outside the windows, which I will assume is completely dark except for a small, very bright sun.
Now, let us choose one of the surfaces in the room and consider the lighting on it.
As with many awkward problems in computer graphics, we will divide the surface into many small patches (of paint) and try to see the world from their point of view. From now on I will use "patch" to mean such a small piece of paint.
Take one of those patches, and imagine that you are that patch. What does the world look like from that perspective?
The View from a Patch
Placing my eye right against the patch and looking outwards, I can see what it sees. The room is very dark, because no light has entered yet, but I have drawn in the edges for your benefit. By adding together all the light it sees, we can calculate the total amount of light from the scene reaching the patch; I will call this the total incident light. This patch can only see the room and the darkness outside the window. Adding up the incident light, we find that no light arrives here: this patch is in darkness.
The View from a Lower Patch
Pick a patch a little further down the pillar. This patch can see the bright sun outside the window. This time, adding up the incident light shows that a great deal of light arrives here (although the sun appears small, it is very bright): this patch is brightly lit.
Lighting on the Pillar
Repeating this process for every patch on the pillar, adding up the incident light each time, we can step back and see what the lighting on the pillar now looks like. The patches near the top of the pillar, which cannot see the sun, are in shadow; those that can see it are brightly lit; and those that can see only part of the sun are dimly lit. Radiosity proceeds in much the same way for every other patch in the scene. As you can see, shadows naturally appear wherever the scene cannot see a source of light.
The Whole Room Lit: First Pass
Repeating the process for every patch in the room gives us this scene. Everything is completely black except for the surfaces that receive light directly from the sun, so this does not yet look like a well-lit scene. Ignore the fact that the lighting looks blocky; we can fix that by dividing the scene into many more patches. What matters is that everything is black except the areas the sun can see; at this point the radiosity renderer is no improvement over any other renderer. But it does not end here: now that some patches are brightly lit, they have become light sources themselves, and can cast light onto other parts of the scene.
The View from the Patch after the First Pass
Patches that could not see the sun, and so received no light, can now see other patches shining. So in the next pass, these patches will come out a little brighter than the complete black they are now.
The Whole Room Lit: Second Pass
This time, once the incident light has been calculated for every patch, many patches that were black before are lit, and the room begins to look more realistic. What has happened is that sunlight has reflected once, off the floor and walls, onto other surfaces.
The Whole Room Lit: Third Pass
The third pass produces the effect of light having reflected twice around the scene. Everything looks much the same, only slightly brighter. The next pass only makes the scene a little brighter still, and even the 16th pass is not very different; after that there is little point in doing more. The radiosity process slowly converges on a lighting solution: each pass changes the scene a little less than the last, until eventually it becomes stable. Depending on the complexity of the scene and the reflectivity of the surfaces, it may take a few passes or a few thousand. It is up to you when to stop and call it done.
4th Pass | 16th Pass
The Algorithm in More Detail: Patches
Emission
Although I have said that we will treat light sources and ordinary objects as the same thing, there must obviously be some source of light in the scene. In the real world some objects emit light and some do not, and all objects absorb light to some extent. We need some way of distinguishing the parts of the scene that emit light. In radiosity we handle this by saying that every patch emits light, but that for most patches the amount emitted is zero. I will call this property of a patch its emission.
Reflectance
When light hits a surface, some of it is absorbed and becomes heat (we can ignore this) and the rest is reflected. I will call the proportion of light reflected by a patch its reflectance.
Incident and Excident Light
During each pass it is necessary to keep track of two more things: how much light is arriving at each patch, and how much light is leaving it. I will call these incident_light and excident_light. The excident light is the visible property of a patch: when we look at a patch, it is the excident light that we see.
incident_light = sum of all light that a patch can see
excident_light = (incident_light * reflectance) + emmision
The Patch Data Structure
Now that we know all the necessary properties of a patch, we can define its data structure. Later, I will explain these four fields in more detail.
structure PATCH
vec4 emmision
float reflectance
vec4 incident
vec4 excident
end structure
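To make the relationship between these four fields concrete, here is a minimal C sketch of the per-patch bookkeeping described above. It is only an illustration under my own assumptions: light is stored as three floats (R, G, B) rather than the pseudocode's vec4, and the names light and patch_update_excident are mine, not the article's.

/* RGB light value; the pseudocode above uses vec4, but three floats work just as well. */
typedef struct { float r, g, b; } light;

typedef struct {
    light emmision;     /* light this patch gives off by itself (zero for most patches) */
    float reflectance;  /* fraction of incoming light that is scattered back, 0..1 */
    light incident;     /* sum of all light the patch can see (filled in each pass) */
    light excident;     /* what the patch looks like: incident * reflectance + emmision */
} PATCH;

/* After a pass has filled in p->incident, derive the visible (excident) light. */
static void patch_update_excident(PATCH *p)
{
    p->excident.r = p->incident.r * p->reflectance + p->emmision.r;
    p->excident.g = p->incident.g * p->reflectance + p->emmision.g;
    p->excident.b = p->incident.b * p->reflectance + p->emmision.b;
}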
Now that I have explained the basics of the algorithm, I will go through it again in pseudocode form to make it more concrete. This is obviously still quite high level, but I will fill in more detail later.
Radiosity Pseudocode: Level 1
Explanation of the code:
    initialise patches
    Passes Loop:
        each patch collects light from the scene
        calculate excident light from each patch
This process must be repeated many times to get a good result; if the renderer needs another pass, jump back to Passes_Loop.
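The loop above is easy to express directly. Below is a sketch of one possible driver in C, continuing the earlier patch sketch; the global patch array, the pass count and gather_incident_light (implemented later with hemicubes) are my assumptions, not part of the article.

#include <stddef.h>

#define NUM_PASSES 16   /* the article suggests anywhere from a few passes to a few thousand */

extern PATCH *patches;          /* all patches in the scene (assumed to exist elsewhere) */
extern size_t patch_count;
light gather_incident_light(const PATCH *p);   /* hypothetical; see the hemicube section below */

void radiosity_solve(void)
{
    /* initialise patches: only emitters start with non-zero excident light */
    for (size_t i = 0; i < patch_count; i++)
        patches[i].excident = patches[i].emmision;

    for (int pass = 0; pass < NUM_PASSES; pass++) {
        /* each patch collects light from the scene (based on last pass's excident light) */
        for (size_t i = 0; i < patch_count; i++)
            patches[i].incident = gather_incident_light(&patches[i]);

        /* calculate excident light from each patch */
        for (size_t i = 0; i < patch_count; i++)
            patch_update_excident(&patches[i]);
    }
}

Gathering all incident light before updating any excident light means every patch sees the previous pass's state, which matches the pass-by-pass behaviour described above.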
Implementing Radiosity: Hemicubes
The first thing we have to deal with when implementing radiosity is how to look at the world from the point of view of each patch. So far I have used a fish-eye view to represent what a patch sees, but that is neither easy nor practical to render. There is a much better way: the hemicube!
The Hemisphere
Imagine a fish-eye view wrapped onto a hemisphere. Place the hemisphere over a patch and, from that patch's point of view, the scene wrapped on the inside of the hemisphere looks exactly like the scene seen directly; there is no difference. Place a camera at the centre of the hemisphere and the view looks just like any ordinary rendering of the scene (right). If you could find a way to render a fish-eye view easily, you could simply add up the brightness of every pixel to get the total incident light on the patch. However, rendering a fish-eye view is not easy, so we must find another way.
The Hemicube
Surprisingly (or unsurprisingly, depending on how mathematical you are), a hemicube looks exactly the same as a hemisphere from the patch's point of view.
Unfolding the Hemicube
Imagine unfolding a hemicube. What do you get? One square image and four rectangular images. The square image in the middle is rendered from the patch's position, looking along the patch's normal. The other four parts are the renderings looking up, down, left and right, at 90° to the normal.
So you can easily produce each of these images: place the camera on the patch and render once each facing forwards, up, down, left and right. The four side images are, of course, cut in half, so only half a rendering is needed for each of them.
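As a concrete illustration of the five renders, here is a small C sketch that builds the five camera directions from a patch normal. The vector helpers and the choice of an arbitrary "world up" vector are my own assumptions, not something from the article.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 v_cross(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static vec3 v_norm(vec3 a) {
    float len = sqrtf(a.x*a.x + a.y*a.y + a.z*a.z);
    vec3 r = { a.x/len, a.y/len, a.z/len };
    return r;
}
static vec3 v_neg(vec3 a) { vec3 r = { -a.x, -a.y, -a.z }; return r; }

/* Given a patch normal, produce the five view directions for the hemicube:
   forward (along the normal) plus up, down, left and right at 90 degrees.
   'world_up' is any vector not parallel to the normal. */
void hemicube_directions(vec3 normal, vec3 world_up, vec3 out_dirs[5])
{
    vec3 forward = v_norm(normal);
    vec3 right   = v_norm(v_cross(world_up, forward));
    vec3 up      = v_cross(forward, right);

    out_dirs[0] = forward;        /* full square image */
    out_dirs[1] = up;             /* half image: up */
    out_dirs[2] = v_neg(up);      /* half image: down */
    out_dirs[3] = v_neg(right);   /* half image: left */
    out_dirs[4] = right;          /* half image: right */
}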
Compensating for the Hemicube's Shape
This is a view of three identical spheres, rendered with a 90° field of view. All three are the same distance from the camera, but because of the properties of the perspective transformation, objects at the edges of the image are stretched and occupy more screen area than the one in the middle. If this were the centre image of a hemicube and the three spheres were light sources, the ones near the edge would cast more light onto the patch than they should. That would be inaccurate, so we must compensate for it. If you used a hemicube to calculate the total incident light on a patch and simply added up the values of all its pixels, objects lying near the edges and corners of the hemicube would be given an unfair weight and would appear to cast more light onto the patch. To compensate, it is necessary to dim the pixels towards the edges, so that all objects contribute equally to the incident light no matter where they lie in the hemicube. Rather than explain in full why this works, I will just tell you how it is done.
Each pixel on a hemicube face is multiplied by the cosine of the angle between the direction the camera is facing and the line from the camera to that pixel. The map on the left is used to compensate for this distortion.
Lambert's Cosine Law
Any budding graphics programmer knows Lambert's cosine law: the apparent brightness of a surface is proportional to the cosine of the angle between the surface normal and the direction of the light. We must be sure to apply the same law here, which is simply done by multiplying the hemicube pixels by the relevant factor. On the left is the map used to apply Lambert's law to the hemicube; white represents 1.0 and black represents 0.0.
The Two Combined: The Multiplier Map
Now pay attention, this is important: multiplying the two maps together gives this map, which is essential for producing an accurate radiosity solution. It corrects the perspective distortion described above and applies Lambert's cosine law at the same time. When you have created the map, the value right at the centre should be 1.0 and the values at the far corners should be 0.0. Before it can be used, the map must be normalised, so that the sum of all its pixels is 1.0. To do this: add up every pixel in the multiplier map, then divide each pixel by that sum. After normalisation, the value at the centre of the map will be much smaller than 1.0.
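To make the construction concrete, here is a hedged C sketch of building the multiplier map for the front face of an N x N hemicube rendered with a 90° field of view, following the recipe above. The function names, the running-total normalisation and the restriction to the front face are my own; a full implementation repeats the loop for the four half-height side faces, using each face's own view direction for the distortion cosine, before normalising over all five faces.

#include <math.h>
#include <stdlib.h>

/* Front face only: each pixel is weighted by
   (cosine between the camera's view direction and the ray through the pixel)
   times (Lambert's cosine between the patch normal and that ray).
   For the front face the camera direction and the patch normal coincide,
   so the two cosines happen to be equal. */
float *build_front_multiplier_map(int n, double *inout_total /* running sum over all faces */)
{
    float *map = malloc((size_t)n * n * sizeof *map);
    if (!map) return NULL;

    for (int y = 0; y < n; y++) {
        for (int x = 0; x < n; x++) {
            /* pixel centre in [-1,1] x [-1,1] on the z = 1 face (90-degree FOV) */
            double u = (2.0 * (x + 0.5) / n) - 1.0;
            double v = (2.0 * (y + 0.5) / n) - 1.0;
            double inv_len = 1.0 / sqrt(u * u + v * v + 1.0);

            double cos_camera  = inv_len;  /* compensates the perspective stretch */
            double cos_lambert = inv_len;  /* Lambert's cosine law term */

            map[y * n + x] = (float)(cos_camera * cos_lambert);
            *inout_total += cos_camera * cos_lambert;
        }
    }
    return map;
}

/* Once every face has been built and *inout_total holds the grand sum,
   divide every pixel of every face by that sum so the whole map sums to 1.0. */
void normalise_map(float *map, int n, double total)
{
    for (int i = 0; i < n * n; i++)
        map[i] = (float)(map[i] / total);
}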
Calculating the Incident Light
This procedure takes a point in the scene (usually a patch) and the normal vector of the surface at that point, and calculates the total amount of light arriving at the point.
First, the algorithm renders the five faces of the hemicube with the RenderView procedure. Its arguments are a point, telling it where the camera should be; a vector, giving the direction the camera should face; and a parameter saying which face of the hemicube to render. The five images are stored in a hemicube structure called H (left column of images below).
Once the hemicube H has been rendered, it is multiplied by the multiplier map M (middle column), and the result is stored in the hemicube R (right column).
Then all the pixel values in R are added up and divided by the number of pixels in a hemicube, which gives the total incident light arriving at the point.
procedure Calc_Incident_Light(point: P, vector: N)
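The body of the original pseudocode did not survive in this copy of the article, so here is a hedged C sketch of what the procedure described above might look like. The type layout, the face ordering and the signature of RenderView are assumptions based on the surrounding text, not the author's actual code; rather than storing the intermediate hemicube R, the sketch accumulates the weighted result directly.

enum { FACE_FRONT, FACE_UP, FACE_DOWN, FACE_LEFT, FACE_RIGHT, FACE_COUNT };

typedef struct { float r, g, b; } light;                       /* same light type as earlier */
typedef struct { int pixel_count; light *pixels; } image;
typedef struct { image faces[FACE_COUNT]; } hemicube;
typedef struct { int pixel_count; float *weights; } weight_image;
typedef struct { weight_image faces[FACE_COUNT]; } multiplier_hemicube;

/* Provided elsewhere: renders the scene into face 'f' of H, with the camera
   at point P, oriented relative to the surface normal N. */
void RenderView(const float P[3], const float N[3], int f, hemicube *H);

light Calc_Incident_Light(const float P[3], const float N[3],
                          hemicube *H, const multiplier_hemicube *M)
{
    light total = { 0.0f, 0.0f, 0.0f };

    for (int f = 0; f < FACE_COUNT; f++) {
        RenderView(P, N, f, H);                  /* H->faces[f]: what the patch sees this way */
        const image *img        = &H->faces[f];
        const weight_image *map = &M->faces[f];

        for (int i = 0; i < img->pixel_count; i++) {
            float w = map->weights[i];           /* multiplier map: distortion * Lambert, normalised */
            total.r += img->pixels[i].r * w;
            total.g += img->pixels[i].g * w;
            total.b += img->pixels[i].b * w;
        }
    }

    /* The pseudocode sums R and divides by the pixel count; because the multiplier
       map here is normalised to sum to 1.0, the weighted sum is already an average,
       so no further division is done in this sketch. */
    return total;
}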
Explanation of Variable Types in the Pseudocode
light: used to store a light value, for example:
structure light
float Red
float Green
float Blue
end structure
hemicube: used to store the view of the scene from some point in the scene. A hemicube consists of five images, as illustrated above, where every pixel is of type light. In the multiplier hemicube, each pixel stores not a light value but a multiplier less than 1.0, as explained above.
structure hemicube
image front
image up
image down
image left
image right
end structure
camera: for example:
structure camera
point lens
vector direction
end structure
Increasing the Accuracy of the Solution
You will be thinking to yourself that this seems like an awful lot of rendering, a very processor-intensive way of doing things. You would of course be right: basically you have to render a texture-mapped scene many thousands of times.
Fortunately, this is something people have been working on since the dawn of time, or at least since the dawn of the raster display, and a great deal of work has gone into rendering texture-mapped scenes as quickly as possible. I will not go into detail here; I am really not the best-qualified person to talk about optimised rendering, and my own renderer is so slow you would need swear words to describe it. The algorithm also lends itself well to acceleration with 3D hardware, although you have to do some extra fiddling up front to get the hardware to render (3x32)-bit textures.
The speed improvement I am going to discuss does not concern the actual rendering of the hemicubes, but rather how to reduce the number of hemicubes that need to be rendered. You will, of course, have noticed that the light maps shown above look blocky and low-resolution; do not worry, their resolution can be increased as far as you want.
Look at the surface outlined in red on the left. The lighting is basically very simple: there is a bright area and a less bright area, with a fairly sharp edge between the two. To reproduce the edge sharply you would normally need a higher-resolution light map, and therefore have to render many more hemicubes. But it hardly seems worth computing so many hemicubes for the bright or dark regions, where neighbouring patches are almost identical in colour; it is far more worthwhile to render extra hemicubes near the sharp edge and only a few everywhere else. This is quite straightforward. The algorithm described below renders a small number of hemicubes spread evenly across the surface, renders more near the edges, and fills in the remaining light-map texels with linear interpolation.
The algorithm: on the far left you can see the light map in the process of being generated; next to it, you can see which pixels were produced by rendering a hemicube and which were filled in by linear interpolation.
1: Use a hemicube to calculate every 4th pixel (the red dots in the left image).
2: Pass type 1: examine the pixels that lie halfway between two previously calculated pixels. If those two neighbours differ by more than some threshold, render a hemicube for this pixel (the green area in the left image); otherwise interpolate its value from the neighbours.
3: Pass type 2: examine the pixels that lie at the centre of a group of four calculated pixels. If the neighbours differ too much, render a hemicube for this pixel; otherwise use linear interpolation.
4: Pass type 1: the same as step 2, but with half the spacing.
5: Pass type 2: the same as step 3, but with half the spacing.
You should be able to see, from the maps on the left, that most of the light-map pixels were produced by linear interpolation. In fact, of a light map of 1769 pixels, only 563 were calculated by rendering a hemicube; the other 1206 were filled in by interpolation. Since rendering a hemicube takes a very long time compared with the negligible cost of a linear interpolation, this represents a speed improvement of about 60%.
This method is not perfect, and it can occasionally miss very small details in a light map, but in most situations it works very well. There is a simple way to help it catch small details, but I will leave that to your imagination.
Here is the pseudocode.
#### CODE EDITING IN PROGRESS - BIT MESSY STILL #### float ratio2(float a, float b) { if ((a==0) && (b==0)) return 1.0; if ((a==0) || (b==0)) return 0.0; if (a>b) return b/a; else return a/b; } float ratio4(float a, float b, float c, float d) { float q1 = ratio2(a,b); float q2 = ratio2(c,d); if (q1<q2) return q1; else return q2; } procedure CalcLightMap() vector normal = LightMap.Surface_Normal float Xres = LightMap.X_resolution float Yres = LightMap.Y_resolution point3D SamplePoint light I1, I2, I3, I4 Accuracy = Some value greater than 0.0, and less than 1.0. Higher values give a better quality Light Map (and a slower render). 0.5 is ok for the first passes of the renderer. 0.98 is good for the final pass. Spacing = 4 Higher values of Spacing give a slightly faster render, but will be more likely to miss fine details. I find that 4 is a pretty reasonable compromise. // 1: Initially, calculate an even grid of pixels across the Light Map. // For each pixel calculate the 3D coordinates of the centre of the patch that // corresponds to this pixel. Render a hemicube at that point, and add up // the incident light. Write that value into the Light Map. // The spacing in this grid is fixed. The code only comes here once per Light // Map, per render pass. for (y=0; y<Yres; y+=Spacing) for (x=0; x<Xres; x+=Spacing) { SamplePoint = Calculate coordinates of centre of patch incidentLight = Calc_Incident_Light(SamplePoint, normal) LightMap[x, y] = incidentLight } // return here when another pass is required Passes_Loop: threshold = pow(Accuracy, Spacing) // 2: Part 1. HalfSpacing = Spacing/2; for (y=HalfSpacing; y<=Yres+HalfSpacing; y+=Spacing) { for (x=HalfSpacing; x<=Xres+HalfSpacing; x+=Spacing) { // Calculate the inbetween pixels, whose neighbours are above and below this pixel if (x<Xres) // Don't go off the edge of the Light Map now { x1 = x y1 = y-HalfSpacing // Read the 2 (left and right) neighbours from the Light Map I1 = LightMap[x1+HalfSpacing, y1] I2 = LightMap[x1-HalfSpacing, y1] // If the neighbours are very similar, then just interpolate. if ( (ratio2(I1.R,I2.R) > threshold) && (ratio2(I1.G,I2.G) > threshold) && (ratio2(I1.B,I2.B) > threshold) ) { incidentLight.R = (I1.R+I2.R) * 0.5 incidentLight.G = (I1.G+I2.G) * 0.5 incidentLight.B = (I1.B+I2.B) * 0.5 LightMap[x1, y1] = incidentLight } // Otherwise go to the effort of rendering a hemicube, and adding it all up. else { SamplePoint = Calculate coordinates of centre of patch incidentLight = Calc_Incident_Light(SamplePoint, normal) LightMap[x1, y1] = incidentLight } } // Calculate the inbetween pixels, whose neighbours are left and right of this pixel if (y<Yres) // Don't go off the edge of the Light Map now { x1 = x-HalfSpacing y1 = y // Read the 2 (up and down) neighbours from the Light Map I1 = LightMap[x1,y1-HalfSpacing]; I2 = LightMap[x1,y1+HalfSpacing]; // If the neighbours are very similar, then just interpolate. if ( (ratio2(I1.R,I2.R) > threshold) && (ratio2(I1.G,I2.G) > threshold) && (ratio2(I1.B,I2.B) > threshold) ) { incidentLight.R = (I1.R+I2.R) * 0.5 incidentLight.G = (I1.G+I2.G) * 0.5 incidentLight.B = (I1.B+I2.B) * 0.5 LightMap[x1,y1] = incidentLight } // Otherwise go to the effort of rendering a hemicube, and adding it all up. 
else { SamplePoint = Calculate coordinates of centre of patch incidentLight = Calc_Incident_Light(SamplePoint, normal) LightMap[x1, y1] = incidentLight } }//end if }//end x loop }//end y loop // 3: Part 2 // Calculate the pixels, whose neighbours are on all 4 sides of this pixel for (y=HalfSpacing; y<=(Yres-HalfSpacing); y+=Spacing) { for (x=HalfSpacing; x<=(Xres-HalfSpacing); x+=Spacing) { I1 = LightMap[x, y-HalfSpacing] I2 = LightMap[x, y+HalfSpacing] I3 = LightMap[x-HalfSpacing, y] I4 = LightMap[x+HalfSpacing, y] if ( (ratio4(I1.R,I2.R,I3.R,I4.R) > threshold) && (ratio4(I1.G,I2.G,I3.G,I4.G) > threshold) && (ratio4(I1.B,I2.B,I3.B,I4.B) > threshold) ) { incidentLight.R = (I1.R + I2.R + I3.R + I4.R) * 0.25 incidentLight.G = (I1.G + I2.G + I3.G + I4.G) * 0.25 incidentLight.B = (I1.B + I2.B + I3.B + I4.B) * 0.25 LightMap[x,y] = incidentLight } else { SamplePoint = Calculate coordinates of centre of patch incidentLight = Calc_Incident_Light(SamplePoint, normal) LightMap[x, y] = incidentLight; } } } Spacing = Spacing / 2 Stop if Spacing = 1, otherwise go to Passes_Loop |
Point Light Sources
It is generally held that radiosity does not deal well with point light sources. That is true to some extent, but it is not impossible to have reasonable point light sources in your scene. I tried adding bright, point-sized objects to my scenes, rendered as Wu pixels. When a hemicube was rendered, they appeared in it as single bright pixels, and so shone light onto the patches. This almost worked, but the rendered images suffered from unacceptable artifacts. The scene on the right is lit by three point spotlights: two behind the pillars at the back, and one near the top left pointing towards the camera. The scene looks fine from this angle, but nasty artifacts appear as soon as the camera moves around.
In the lower image you can see three dark lines along the wall and floor. They seem to be caused by the point light getting lost right at the edges of the hemicubes. Perhaps it would not have been so bad if my maths had been perfect and the hemicube edges matched exactly, but I suspect there would still have been noticeable artifacts. So, rather than rendering point lights into the hemicubes, you can use ray tracing to cast light from point sources onto the patches directly.
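Here is a hedged sketch of the ray-traced alternative suggested above: cast a single shadow ray from the patch to the light and, if it is unobstructed, add the light's contribution directly to the patch's incident light. The visibility test, the inverse-square falloff and all of the names are my assumptions, not the article's code.

#include <math.h>

typedef struct { float x, y, z; } vec3;
typedef struct { float r, g, b; } light;
typedef struct { vec3 position; light intensity; } point_light;

/* Assumed to exist elsewhere: returns 1 if the segment from 'from' to 'to'
   is unobstructed by scene geometry (a single shadow ray). */
int segment_is_unoccluded(vec3 from, vec3 to);

/* Add a point light's contribution on top of whatever the hemicube pass gathered. */
void add_point_light(light *incident, vec3 patch_pos, vec3 patch_normal, const point_light *pl)
{
    vec3 d = { pl->position.x - patch_pos.x,
               pl->position.y - patch_pos.y,
               pl->position.z - patch_pos.z };
    float dist2 = d.x*d.x + d.y*d.y + d.z*d.z;
    float dist  = sqrtf(dist2);

    /* Lambert's cosine law: light arriving at a grazing angle counts for less. */
    float cos_theta = (d.x*patch_normal.x + d.y*patch_normal.y + d.z*patch_normal.z) / dist;
    if (cos_theta <= 0.0f) return;                                /* light is behind the patch */
    if (!segment_is_unoccluded(patch_pos, pl->position)) return;  /* patch is in shadow */

    float atten = cos_theta / dist2;   /* inverse-square falloff (assumed) */
    incident->r += pl->intensity.r * atten;
    incident->g += pl->intensity.g * atten;
    incident->b += pl->intensity.b * atten;
}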
Optimising with 3D Rendering Hardware
One of the good things about radiosity is that it is quite easy to accelerate with 3D rendering hardware, as long as you can make the hardware do plain texture mapping, with shading, anti-aliasing, mip-mapping and so on switched off.
How you go about this optimisation may not be quite what you expect, but it works well, and it lets the CPU and the rendering hardware work in parallel. The hardware handles the texture mapping and hidden-surface removal (Z-buffering), and the CPU handles the rest.
As far as I know, no rendering hardware can handle floating-point lighting values, or even lighting values above 255, so there is no point in asking it to render the lit scene directly. However, with a little subtlety, we can have the hardware do the texture mapping and hidden-surface removal, and then put the lighting back in with a simple, fast loop.
If 3D hardware can write 32-bit pixels to the screen, then it can be made to write 32-bit values representing anything we want. The hardware cannot write floating-point RGB values, but it can write 32-bit pointers to the patches that should appear at each pixel. Once that has been rendered, you simply read back each pixel and use its 32-bit value as an address to locate the patch that should have been rendered there.
Here is one of the patch maps from the scene above. Each pixel consists of three floating-point values, one each for red, green and blue, so the 3D hardware cannot deal with this texture directly.
And here is another map. It looks very strange, but ignore its appearance for now: each pixel in this map is a 32-bit value which is the address of the corresponding patch in the map on the left. The odd colours appear because the lowest three bytes of each address are being interpreted as colours.
Once you have a whole set of these pointer textures (one for each surface in the scene), you can hand them to the 3D hardware and let it render with them. The scene it produces looks something like the image on the right. It looks very odd, but you can make out surfaces covered with patterns like the one above. The pixels should not be interpreted as colours but as pointers: if your graphics card uses 32-bit textures, they will come out in some form such as ARGB, with A, R, G and B each taking 8 bits. Ignore that structure, treat each pixel as a single 32-bit value, use it as a memory pointer back to the patch that should be there, and rebuild the scene properly from the patches. Important: you must make sure the scene is rendered with pure texture mapping. That means no linear interpolation, no anti-aliasing, no motion blur, no shading or lighting, no mip-mapping, no fog, no gamma correction, nothing that is not a straight texture map. If you do not disable these, the addresses produced will not point to the right places and your code will almost certainly crash.
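Below is a hedged C sketch of the readback loop described above. It stores a patch index rather than a raw memory address in each 32-bit pixel (a safer variant of the same idea), and the function and field names are my assumptions, not the article's code.

#include <stdint.h>
#include <stddef.h>

typedef struct { float r, g, b; } light;
typedef struct { light excident; /* ...other patch fields... */ } PATCH;

/* The hardware has rendered a frame in which every 32-bit pixel is an index
   into the patch array. Walk the framebuffer, look up each patch, and replace
   the pointer pixel with the patch's floating-point excident light. */
void resolve_pointer_frame(const uint32_t *framebuffer, size_t pixel_count,
                           const PATCH *patches, size_t patch_count,
                           light *out_pixels)
{
    for (size_t i = 0; i < pixel_count; i++) {
        uint32_t index = framebuffer[i];
        if (index >= patch_count) {      /* background: no patch was rendered here */
            out_pixels[i].r = out_pixels[i].g = out_pixels[i].b = 0.0f;
            continue;
        }
        out_pixels[i] = patches[index].excident;
    }
}

The output of this loop is exactly the weighted-and-summed input that the hemicube multiplier map expects, so the CPU-side radiosity code does not need to know whether a face was rendered in software or by the hardware.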
That should explain fairly clearly how to optimise the radiosity process. If anything is still unclear, let me know and I will try to add more explanation.
Misunderstanding and Confusion
(What to do with the light maps once you have rendered them)
The output of a radiosity renderer is an image in which every pixel consists of three floating-point values. The range of brightness in such an image can be vast: as I said earlier, the sky is very much brighter than an average indoor surface, and the sun is thousands of times brighter still. What do you do with an image like this?
Your average monitor can produce only fairly dim light, not much brighter than an indoor surface. Displaying such an image directly would require a monitor that could emit light as bright as the sun, and a graphics card with a floating-point value per channel. These things do not exist, for technical reasons, never mind safety. So what can you do?
Most people are happy to look at photographs and accept them as faithful records of reality. They are wrong: photographs are no better than monitors at reproducing real-world brightness. A photograph cannot give off light as bright as the sun, yet people never question its realism. This is where the confusion sets in.
Human Vision
Vision is just about the most important sense we have. We trust our lives to it every day, and so far it has not got us killed; indeed it has often saved life and limb. It was an important sense for our ancestors too, right back to the first fish, or whatever it was we evolved from. Our eyes have had a very long time to evolve, they have been critical to our survival, and so they have become very good indeed: they are sensitive to extremely low light levels, down to something like five photons, yet they can also cope with looking at a very bright sky. And the eyes are not the only part of the visual system; perhaps even more important is the brain behind them, an incredibly sophisticated, many-layered piece of processing that turns the output of the eyes into our sense of what is in front of us. The brain has to recognise the same object however it is lit, and it does an amazing job of compensating for different kinds of lighting. We do not even notice the change in brightness when we walk from outdoors, lit by a bright sky, into a room lit by dim yellow bulbs; yet if you take photographs in those two situations, you may have to switch to a different type of film to stop the pictures coming out yellow or dark.
Try this: go outside on a completely overcast day and stand in front of something white. Look up at the clouds and they appear grey; look at the white object and it appears white. So what? The white object is lit by the grey clouds, so it cannot possibly be brighter than they are (in fact it will be darker), and yet we still perceive it as white. If you do not believe me, take a photograph showing the white object with the clouds behind it: in the photo, the white object will look much darker than the clouds.
Do not trust your eyes: they are a great deal smarter than you are.
So what can you do? Since people are so willing to accept photographs as representations of reality, we can take the output of the renderer, which is a physical model of the light in the scene, and process it with a rough approximation of camera film. I have already written an article about this, Exposure, so I will say no more about it here.
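Since the article defers to the author's separate Exposure article, here is a hedged sketch of one common exposure-style mapping from unbounded light values to displayable 0..255 pixels; the exponential form and the constant K are assumptions of mine, not a summary of that article.

#include <math.h>
#include <stdint.h>

/* Squash an unbounded floating-point light value into a displayable 0..255 range
   with an exponential exposure curve: low values stay roughly linear, very bright
   values saturate smoothly instead of clipping. K plays the role of the film's
   sensitivity and is chosen by eye. */
static uint8_t expose_channel(float light_value, float K)
{
    float v = 1.0f - expf(-light_value * K);   /* 0 maps to 0, large values approach 1 */
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return (uint8_t)(v * 255.0f + 0.5f);
}

Applying this per channel to the radiosity output gives an ordinary 24-bit image; the translator's note at the end of the article makes a related point about feeding the same floating-point output into an HDR pipeline instead.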
References
The Solid Map: Methods for Generating a 2D Texture Map for Solid Texturing: http://graphics.cs.uiuc.edu/~jch/papers/pst.pdf
This paper will be very useful if you are going to try to implement your own radiosity renderer. How do you apply a texture map evenly, and without distortion, across some arbitrary polygonal object? A radiosity renderer will need to do this.
Helios32: http://www.helios32.com/
Offers a platform-independent solution for developers looking for radiosity rendering capabilities.
Radiosity In English: http://www.flipcode.com/tutorials/tut_rad.shtml
As the title suggests this is an article about Radiosity, written using English words. I didn't understand it.
Real Time Radiosity: http://www.gamedev.net/reference/programming/features/rtradiosity2/
That sounds a little more exciting. There doesn't seem to be a demo though.
An Empirical Comparison of Radiosity Algorithms: http://www.cs.cmu.edu/~radiosity/emprad-tr.html
A good technical article comparing matrix, progressive, and wavelet radiosity algorithms. Written by a couple of the masters.
A Rapid Hierarchical Radiosity Algorithm: http://graphics.stanford.edu/papers/rad/
A paper that presents a rapid hierarchical radiosity algorithm for illuminating scenes containing large polygonal patches.
KODI's Radiosity Page : http://ls7-www.informatik.uni-dortmund.de/~kohnhors/radiosity.html
A whole lot of good radiosity links.
Graphic Links: http://web.tiscalinet.it/GiulianoCornacchiola/Eng/GraphicLinks6.htm
Even more good links.
Rover: Radiosity for Virtual Reality Systems: http://www.scs.leeds.ac.uk/cuddles/rover/
*Very Good* A thesis on Radiosity. Contains a large selection of good articles on radiosity, and very many abstracts of papers on the subject.
Daylighting Design: http://www.arce.ukans.edu/book/daylight/daylight.htm
A very in-depth article about daylight.
Translator's note: when this article was written, HDR had not yet become a mainstream technique. Today, you can use the light maps produced by a radiosity renderer as the source for HDR rendering, giving convincing dynamic brightness adaptation and bloom effects.