Spacecraft Radiation Mapping #895
-
Hello Mitsuba Community,

I am a PhD student using Mitsuba for my research on solar radiation pressure (SRP) modelling, specifically for spacecraft. My academic background is in astrodynamics, not rendering, so I appreciate your patience with any naive questions or oversights.

Objective:

Current Methodology:
2. Surface Interaction Points: With a camera, I accumulate multiple surface interaction points on the object, saving their shading frames and positions. I rotate the camera around the object, continually recording these points. I then resample the points for an even distribution over the object's surface (I specify the spacing myself) using KD-trees, and save the s, t, n vectors for each. For ray generation:

```python
ray_origin_local = mi.Vector3f(x, y, 0)
ray_origin = mi.Frame3f(cam_dir).to_world(ray_origin_local) + cam_origin
camera_rays = mi.Ray3f(o=ray_origin, d=cam_dir)
si = scene.ray_intersect(camera_rays)
```

3. Hemisphere of Sensors: I then generate a unit hemisphere of orthographic sensors (the number of sensors in this hemisphere is defined manually).

4. Sensor Application: I apply this hemisphere of sensors to each of the surface interaction points in the list (from step 2). So if my object ends up having 700 surface interaction points and I decide to make 10 hemisphere partitions per point, I will have 7000 sensors. Since each shading frame has a local s, t, n reference frame, I rotate the hemisphere into the correct orientation for each of these points.

5. Rendering: I then render each of these sensors and take the average value of all the pixels in the rendered image (I wonder if this is part of my problem).

6. Vector Averaging: For each point on my object I now have 10 radiation vectors, each pointing in the direction its sensor was pointing and scaled by the average strength of the pixels in the rendered image. I average the 10 vectors at each location, so that each surface interaction point on the object's surface now has an associated direction and magnitude of radiation.

Issue:

I believe that this may be in part because the sensors at each location on the surface of the object are not rendering the light coming directly from the emitter, but only the light that is reflected from the surface of the object. I have come to this conclusion after inspecting the individual renders that point towards the light source: they are blank (black). I've perused the documentation, but I'm still uncertain how to configure the sensors to capture this direct light, or whether there is a better way of doing this altogether... Despite experimenting with various integrators and light sources, I've had no success. I'm contemplating whether a distinct sensor type might be the solution. The irradiance meter seemed to be a good match for my needs, but I was not able to make it work on any .obj.

Please find attached sample outputs of my method on a unit sphere:

I trust this summary clearly conveys my efforts. While I recognise that a thorough review of my approach might be demanding, I'd deeply appreciate any insights or suggestions you can offer. Thank you for your time, and for offering this exceptional tool to the community.

Best regards,
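For concreteness, this is roughly how I understand the irradiance meter is meant to be attached to a mesh (a minimal sketch based on my reading of the docs; the filename, sample count and film settings are placeholders rather than my actual setup):

```python
import mitsuba as mi

mi.set_variant('scalar_rgb')

scene = mi.load_dict({
    'type': 'scene',
    'integrator': {'type': 'path'},
    # Directional emitter approximating the Sun (solar constant in W/m^2)
    'sun': {
        'type': 'directional',
        'direction': [1.0, 0.0, 0.0],
        'irradiance': {'type': 'rgb', 'value': 1367},
    },
    'spacecraft': {
        'type': 'obj',
        'filename': 'spacecraft.obj',   # placeholder path
        'bsdf': {'type': 'diffuse'},
        # The irradiance meter is nested inside the shape it measures
        'sensor': {
            'type': 'irradiancemeter',
            'sampler': {'type': 'independent', 'sample_count': 512},
            'film': {
                'type': 'hdrfilm',
                'width': 1,
                'height': 1,
                'rfilter': {'type': 'box'},
            },
        },
    },
})

img = mi.render(scene)
print(img)  # average incident irradiance over the attached mesh surface
```

My understanding is that this measures the irradiance averaged over the entire attached shape rather than per point, which may be why it did not slot directly into the per-point mapping described above.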
-
This is definitely not something I was expecting 😄 -- it's cool to see Mitsuba being used in such a context. I do need a few clarifications:
This is somewhat expected. Your scene is built with a few hard constraints: the …

The …
-
Hi @njroussel, 👋 😄

I hope you are well! I'm back with a few more queries, and I must apologize for the length of this post. Your insights thus far have been immensely valuable, and I completely understand if your time constraints don't permit a detailed response to these follow-up questions.

For reference, here is the current state of my hemispherical sensor plugin:

```python
import numpy as np
import drjit as dr
import mitsuba as mi


def square_to_uniform_hemisphere(sample, scale_factor=1.0):
    phi = 2 * sample.x * dr.pi
    # Use the second sample dimension to get sin(theta) and cos(theta)
    cos_theta = sample.y
    sin_theta = dr.sqrt(1.0 - cos_theta * cos_theta)
    # Convert spherical to Cartesian coordinates
    x = dr.cos(phi) * sin_theta
    y = cos_theta
    z = -dr.sin(phi) * sin_theta
    return scale_factor * mi.Vector3f(x, y, z)


class HemisphericalCamera(mi.Sensor):
    """Defines a hemispherical sensor with inward-pointing rays."""

    ray_origins = []
    ray_directions = []
    RTN = np.empty((0, 3, 3), float)

    def __init__(self, props=mi.Properties()):
        super().__init__(props)
        # Scale the size of the sensor relative to world units
        self.scale_factor = props.get('scale_factor', 1)
        # Add the hemisphere_center to the camera properties
        self.hemisphere_center = props.get('hemisphere_center', mi.Vector3f(0, 0, 0))
        # Compute the RTN vectors for this sensor instance (for visualization)
        self.compute_and_store_RTN()

    def compute_and_store_RTN(self):
        # Assuming the default 'up' direction and 'north' for the sensor when it's created
        up = np.array([0.0, 1.0, 0.0])     # Y-up convention
        north = np.array([0.0, 0.0, 1.0])  # Adjusted "north" towards positive Z since we're using a left-handed system in Mitsuba
        # Calculate the RTN vectors based on 'up' and 'north'
        r = up                   # Radial vector pointing outwards from the dome's base
        t = np.cross(north, up)  # Changed the order for left-handed system
        t /= np.linalg.norm(t)   # Normalize the 't' vector
        n = np.cross(r, t)       # Normal, points towards the dome center, perpendicular to 'r' and 't'
        # Convert the RTN vectors from NumPy arrays back to Mitsuba's Vector3f objects
        r_mitsuba = mi.Vector3f(r[0], r[1], r[2])
        t_mitsuba = mi.Vector3f(t[0], t[1], t[2])
        n_mitsuba = mi.Vector3f(n[0], n[1], n[2])
        # Convert to world coordinates
        r_vec = self.world_transform() @ r_mitsuba
        t_vec = self.world_transform() @ t_mitsuba
        n_vec = self.world_transform() @ n_mitsuba
        rtn_array = np.array([r_vec, t_vec, n_vec], dtype=float).reshape(1, 3, 3)
        # Append the RTN vectors for this instance to the class-level list
        HemisphericalCamera.RTN = np.concatenate((HemisphericalCamera.RTN, rtn_array), axis=0)

    def sample_ray(self, time, wavelength_sample, position_sample, aperture_sample, active=True):
        # This function is called during the path tracing integrator call
        wavelengths, wav_weight = self.sample_wavelengths(dr.zeros(mi.SurfaceInteraction3f),
                                                          wavelength_sample, active)
        center = self.world_transform().translation()
        local_o = square_to_uniform_hemisphere(position_sample, self.scale_factor)
        o = self.hemisphere_center  # Using the hemisphere_center property to adjust the ray's origin
        o += self.world_transform() @ local_o
        d = center - o
        # Append the ray's origin and direction to the class-level lists for visualization
        self.ray_origins.append(o)
        self.ray_directions.append(d)
        return mi.Ray3f(o, d, time, wavelengths), wav_weight

    def sample_ray_differential(self, time, wavelength_sample, position_sample, aperture_sample, active=True):
        ray, weight = self.sample_ray(time, wavelength_sample, position_sample, aperture_sample, active)
        return mi.RayDifferential3f(ray), weight

    def sample_direction(self, it, sample, active=True):
        # This function is called during the particle tracing integrator call.
        # This is not useful though, because the rays are not sampled according
        # to the "square_to_uniform_hemisphere" function.
        trafo = self.world_transform()
        ref_p = trafo.inverse() @ it.p
        d = mi.Vector3f(ref_p)
        dist = dr.norm(d)
        inv_dist = 1.0 / dist
        d *= inv_dist
        resolution = self.film().crop_size()
        ds = dr.zeros(mi.DirectionSample3f)
        ds.uv = mi.Point2f(dr.atan2(d.x, -d.z) * dr.inv_two_pi, dr.safe_acos(d.y) * dr.inv_pi / 2.0)
        ds.uv.x -= dr.floor(ds.uv.x)
        ds.uv *= resolution
        sin_theta = dr.safe_sqrt(1 - d.y * d.y)
        ds.p = trafo.translation()
        ds.d = (ds.p - it.p) * inv_dist
        ds.dist = dist
        ds.pdf = dr.select(active, 1.0, 0.0)
        weight = (1 / (2 * dr.pi * dr.pi * dr.maximum(sin_theta, dr.epsilon(mi.Float)))) * dr.sqr(inv_dist)
        return ds, mi.Spectrum(weight)


mi.register_sensor("hemispherical", lambda props: HemisphericalCamera(props))
```

Good news:
Bad news:
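For reference, instantiating one of these sensors looks roughly like this (a minimal sketch with placeholder film/sampler settings; it assumes a scene has already been loaded elsewhere, and that the two custom properties can be passed straight through the dict):

```python
# Placeholder settings; 'scene' is assumed to be loaded elsewhere, and the
# 'hemispherical' plugin must already be registered as above.
hemi_sensor = mi.load_dict({
    'type': 'hemispherical',
    'scale_factor': 1.0,
    'hemisphere_center': [0.0, 0.0, 0.0],
    'to_world': mi.ScalarTransform4f.look_at(origin=[0, 0, 2],
                                             target=[0, 0, 0],
                                             up=[0, 1, 0]),
    'film': {'type': 'hdrfilm', 'width': 64, 'height': 64,
             'rfilter': {'type': 'box'}},
    'sampler': {'type': 'independent', 'sample_count': 16},
})

img = mi.render(scene, sensor=hemi_sensor, spp=16)
mean_pixel_value = float(np.array(img).mean())  # the per-sensor value used in step 6
```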
BSDF/Integrator Issues

Having reviewed some relevant posts, notably (#376) and (#365), it seemed … My initial attempt at deciding what integrator to use involved setting up a render with a …

```python
'emitter': {
    'type': 'directional',
    'direction': [1.0, 0.0, 0.0],
    'irradiance': {
        'type': 'rgb',
        'value': 1367,
    },
},
'sphere': {
    'type': 'sphere',
    'radius': 1.0,
    'bsdf': {
        'type': 'diffuse',
        'reflectance': {
            'type': 'rgb',
            'value': [1, 1, 1]
        }
    },
},
'sensor': {
    'type': 'perspective',
    'to_world': mi.ScalarTransform4f.look_at(origin=[-5, 0, 0],
                                             target=[0, 0, 0],
                                             up=[0, 0, 1]),
```

Querying the pixel at the center of the image I get a value of … Switching to a highly specular BSDF, however, results in physically implausible values:

```python
"RoughBSDF": {
"type": "roughconductor",
"material": "none",
"alpha": 0.08,
"distribution": "ggx",
},
'sphere': {
'type': 'sphere',
'radius': 1,
"bsdf": {
"type": "ref",
"id": "RoughBSDF",
},
} With I'm at a loss interpreting these results. Are any of these readings physically accurate? Does the 'path' integrator have an "upper" limit of 0.28 and lower limit of 0.9 for rendering specular and diffuse surfaces? Is such a calibration valid at all? In addition to returning very similar values to the Hemisphere IssuesThis is what one instance of my current sensor looks like (only 1/10000 rays plotted here) when positioned on a sphere directly opposite the incoming directional light source: Placing 200 sensors around a unit sphere with seems to model the physical behaviour well enough within a certain range of alpha values [0.28-0.9], but anything outside these bounds starts to lose physical sense... Low alpha (0.28) Highest mean sensor pixel value ~1400 High Alpha(0.85) Highest mean sensor pixel value ~420 In my sensor plugin I have defined 2 "extra" parameters:
To my confusion, altering As always I'm genuinely grateful for any insight you can offer on this matter. I also want to express my commitment to appropriately and honestly acknowledging your contributions in this work. Should my work here lead to any future publications or presentations, I will ensure credit is given in a manner that accurately reflects your intellectual input and guidance, as per your wishes and academic norms. Thank you once again for your time and expertise! Charles |
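P.S. In case it helps frame the diffuse sanity check above, my own back-of-the-envelope expectation (which may itself be where my reasoning goes wrong) is that a white Lambertian sphere lit head-on by a directional emitter of irradiance E = 1367 should produce a centre-pixel radiance of

L_o = (ρ / π) · E · cos θ = (1 / π) · 1367 · 1 ≈ 435

since the perspective camera records outgoing radiance and the Lambertian BRDF is ρ/π.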
-
Hello @njroussel, 👋 🤠

As per your recommendation I developed a scaling function for the pixels, and I am pleased to share that the values I am now obtaining make perfect physical sense across various geometries when using the diffuse BSDF 🎉 🥳

I have one more question that I hope you might be able to shed some light on. Could you please tell me whether there is a list of the BSDFs in Mitsuba that adhere to the principles of energy conservation? I have looked in the docs but was not able to find a clear list...

Looking ahead, provided I can find suitable BSDFs that approximate spacecraft material properties, my intention is to benchmark this method against state-of-the-art spacecraft radiation pressure mapping methods. All code generated in this process will be made available openly. I am keen to ensure that your contributions to this endeavour are properly acknowledged in a manner that suits you best, so I plan to reconnect once I have more results in hand to discuss your preferred mode of acknowledgment.

Once again, thank you for sharing your time and expertise!

Best,
Great 🎉 !
We unfortunately don't have this. In short, any material that relies on microfacet theory (materials which typically have a roughness parameter) is not energy-conserving. I think that leaves only the `diffuse`, `dielectric` and `conductor`. Note that the exact energy loss is usually known and can be compensated for. Sometimes, this loss …