diff --git a/.gitignore b/.gitignore index 89942d9e..dc26d7a6 100644 --- a/.gitignore +++ b/.gitignore @@ -23,7 +23,8 @@ build .LSOverride # Icon must end with two \r -Icon +Icon + # Thumbnails ._* @@ -558,3 +559,10 @@ xcuserdata *.xccheckout *.moved-aside *.xcuserstate +.vscode/* +renders/* +ebon_hawk/* +scenes/ebon_hawk/* +old_renders/* +oldFinalRenders/* + diff --git a/README.md b/README.md index 110697ce..c5d93118 100644 --- a/README.md +++ b/README.md @@ -1,13 +1,226 @@ CUDA Path Tracer ================ +![](finalRenders/ebon_final.png) **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 3** -* (TODO) YOUR NAME HERE -* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab) +* Richard Chen +* Tested on: Windows 11, i7-10875H @ 2.3GHz 16GB, RTX 2060 MAX-Q 6GB (PC) -### (TODO: Your README) +## Overview -*DO NOT* leave the README to the last minute! It is a crucial part of the -project, and we will not be able to grade you without a good README. +Path tracing is a rendering technique where light rays are shot out from the "camera"
+into the scene. Whenever a ray meets a surface, we track how it gets attenuated and scattered.
+This allows for more accurate rendering at the cost of requiring vast amounts of computation.
+Fortunately, since photons do not (ignoring relativity) interact with each other,
+this is very parallelizable, making it a perfect fit for running on a GPU.
+Cornell Box Inspired | Very Fast Shiny Cow +---------------------|------------------ +![](finalRenders/cornell_demo1.png) | ![](finalRenders/hyperspace_cow.png) +
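+
+For a rough idea of what one iteration does per pixel, here is a heavily simplified sketch of
+the bounce loop. Names like `sceneIntersect`, `scatterRay`, and the struct fields are illustrative
+stand-ins, not the exact code in this repo, which runs each bounce as its own CUDA kernel over all
+active paths rather than as a single per-pixel loop:
+
+```cuda
+// One path per pixel per iteration: bounce until the ray dies or runs out of depth.
+__device__ glm::vec3 tracePath(Ray ray, int maxDepth) {
+    glm::vec3 throughput(1.0f);   // accumulated attenuation along the path
+    glm::vec3 radiance(0.0f);
+    for (int depth = 0; depth < maxDepth; ++depth) {
+        Intersection isect;
+        if (!sceneIntersect(ray, isect)) break;       // escaped the scene
+        if (isect.material.emittance > 0.0f) {        // hit a light source
+            radiance += throughput * isect.material.color * isect.material.emittance;
+            break;
+        }
+        scatterRay(ray, isect, throughput);           // attenuate and pick a new direction
+    }
+    return radiance;
+}
+```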
+ + +## Features +
+
+* Diffuse surfaces
+Since most surfaces are not microscopically smooth, incoming light can leave in any direction; each
+diffuse bounce therefore picks a new ray direction at random from the hemisphere around the surface normal.
+![](finalRenders/cornell_dfl.png)
+
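+
+A diffuse bounce is commonly implemented with cosine-weighted hemisphere sampling. A minimal sketch,
+assuming glm and two uniform random numbers supplied by the caller (not the exact helper in this repo):
+
+```cuda
+// Cosine-weighted sample of the hemisphere around the surface normal n.
+// u1, u2 are uniform random numbers in [0, 1).
+__device__ glm::vec3 sampleHemisphereCosine(glm::vec3 n, float u1, float u2) {
+    float up = sqrtf(u1);             // cos(theta)
+    float over = sqrtf(1.0f - u1);    // sin(theta)
+    float around = u2 * 2.0f * 3.14159265f;
+
+    // Build an arbitrary tangent basis around the normal.
+    glm::vec3 other = fabsf(n.x) < 0.577f ? glm::vec3(1, 0, 0) : glm::vec3(0, 1, 0);
+    glm::vec3 t = glm::normalize(glm::cross(n, other));
+    glm::vec3 b = glm::cross(n, t);
+
+    return up * n + over * (cosf(around) * t + sinf(around) * b);
+}
+```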
+ +* Specular reflection +Smooth surfaces reflect light neatly about the surface normal, like a mirror does. +![](finalRenders/cornell_specular.png) +
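+
+A perfectly specular bounce is just a mirror reflection about the normal, which glm already provides.
+The `Material` field name below is an assumption for illustration:
+
+```cuda
+// Mirror reflection: glm::reflect(d, n) computes d - 2 * dot(n, d) * n.
+__device__ void reflectRay(Ray& ray, glm::vec3 hitPoint, glm::vec3 normal,
+                           const Material& m, glm::vec3& throughput) {
+    ray.direction = glm::reflect(ray.direction, normal);
+    ray.origin = hitPoint + 0.001f * normal;   // offset to avoid self-intersection ("shadow acne")
+    throughput *= m.specularColor;             // tint by the material's specular color
+}
+```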
+
+* Dielectrics with Schlick's Approximation and Snell's Law
+Light moves at different speeds through different media, which can cause it
+to refract and/or reflect at an interface. In these examples, glass and air are used with indices of refraction
+of 1.5 and 1, respectively. The further the incoming light is from the surface normal, the more likely
+it is to reflect; Schlick's approximation gives a cheap estimate of that reflection probability.
+![](finalRenders/cornell_dielectric.png)
+
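+
+A sketch of the dielectric scattering decision, with Snell's law handled by `glm::refract` and the
+reflection probability from Schlick's approximation. The ratio `eta = n1 / n2` and the random number
+come from the caller; this is illustrative rather than the exact code here:
+
+```cuda
+// dir is the (normalized) incoming direction, n the surface normal facing against it.
+__device__ glm::vec3 scatterDielectric(glm::vec3 dir, glm::vec3 n, float eta, float u) {
+    float cosTheta = fminf(glm::dot(-dir, n), 1.0f);
+
+    // Schlick's approximation: R(theta) = R0 + (1 - R0) * (1 - cos(theta))^5
+    float r0 = (1.0f - eta) / (1.0f + eta);
+    r0 = r0 * r0;
+    float reflectProb = r0 + (1.0f - r0) * powf(1.0f - cosTheta, 5.0f);
+
+    glm::vec3 refracted = glm::refract(dir, n, eta);
+    bool totalInternalReflection = glm::length(refracted) < 1e-6f;  // glm::refract returns 0 on TIR
+
+    return (totalInternalReflection || u < reflectProb) ? glm::reflect(dir, n) : refracted;
+}
+```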
+
+* Anti-Aliasing via Stochastic Sampling
+As opposed to classical anti-aliasing, which super-samples the image and is thus very computationally
+expensive, stochastic sampling jitters each camera ray slightly within its pixel. This reduces the jagged artifacts
+from aliasing at the cost of a little extra noise, but does not involve shooting extra rays per pixel.
+Notice how the left edge of the sphere is not nearly as jagged in the anti-aliased version.
+![](finalRenders/cornell_antialiasing.png)
+
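+
+The jitter itself is tiny: instead of always shooting the ray through the pixel center, offset it by a
+random sub-pixel amount each iteration. The `Camera` field names here are assumptions for illustration:
+
+```cuda
+// jx, jy are uniform random numbers in [0, 1), regenerated every iteration.
+__device__ Ray generateJitteredCameraRay(const Camera& cam, int x, int y, float jx, float jy) {
+    Ray ray;
+    ray.origin = cam.position;
+    ray.direction = glm::normalize(cam.view
+        - cam.right * cam.pixelLength.x * ((float)x + jx - (float)cam.resolution.x * 0.5f)
+        - cam.up    * cam.pixelLength.y * ((float)y + jy - (float)cam.resolution.y * 0.5f));
+    return ray;
+}
+```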
+
+* Depth of Field/Defocus Blur
+While the basic model shoots rays out of an infinitesimal point, real-life cameras have a lens
+through which the light passes, and the laws of physics prevent light from being focused perfectly.
+For a camera, this means that objects further from the plane of focus appear blurrier. In the path tracer, the ray origins are jittered across a thin-lens aperture and each ray is re-aimed at a fixed focal point, which approximates a lens.
+![](finalRenders/defocus_blur.png)
+
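+
+A thin-lens sketch of that jitter: sample a point on a disk-shaped aperture, keep the point on the
+plane of focus fixed, and re-aim the ray. `lensRadius` and `focalDistance` are assumed camera
+parameters here, not fields from any particular codebase:
+
+```cuda
+// u1, u2 are uniform random numbers in [0, 1).
+__device__ void applyDepthOfField(Ray& ray, const Camera& cam, float u1, float u2) {
+    // Where the unperturbed ray would cross the plane of perfect focus.
+    glm::vec3 focalPoint = ray.origin + cam.focalDistance * ray.direction;
+
+    // Uniformly sample the lens aperture in polar coordinates.
+    float r = cam.lensRadius * sqrtf(u1);
+    float theta = 2.0f * 3.14159265f * u2;
+    glm::vec3 lensOffset = r * (cosf(theta) * cam.right + sinf(theta) * cam.up);
+
+    ray.origin += lensOffset;
+    ray.direction = glm::normalize(focalPoint - ray.origin);
+}
+```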
+
+* OBJ Mesh Loading
+While cubes and spheres are a great starting point, one of the great joys in life is
+to render Mario T-posing. Many 3D models are available on the internet, most of them
+meshes composed of triangles. I used [tinyObj](https://github.com/tinyobjloader/tinyobjloader) to load models in the Wavefront OBJ file format.
+![](finalRenders/cornell_mario.png)
+
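+
+Loading happens on the host before anything is uploaded to the GPU. A condensed sketch using the
+tinyobjloader API (the `LoadObj` signature shown matches recent releases; `Triangle` is this
+project's own struct and its layout here is illustrative):
+
+```cuda
+#include <string>
+#include <vector>
+#include "tiny_obj_loader.h"
+
+bool loadMesh(const std::string& path, std::vector<Triangle>& out) {
+    tinyobj::attrib_t attrib;
+    std::vector<tinyobj::shape_t> shapes;
+    std::vector<tinyobj::material_t> materials;
+    std::string warn, err;
+    if (!tinyobj::LoadObj(&attrib, &shapes, &materials, &warn, &err, path.c_str()))
+        return false;
+
+    for (const auto& shape : shapes) {
+        for (size_t i = 0; i + 2 < shape.mesh.indices.size(); i += 3) {
+            Triangle tri;
+            for (int v = 0; v < 3; ++v) {
+                tinyobj::index_t idx = shape.mesh.indices[i + v];
+                tri.pos[v] = glm::vec3(attrib.vertices[3 * idx.vertex_index + 0],
+                                       attrib.vertices[3 * idx.vertex_index + 1],
+                                       attrib.vertices[3 * idx.vertex_index + 2]);
+            }
+            out.push_back(tri);  // normals and UVs are pulled out the same way when present
+        }
+    }
+    return true;
+}
+```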
+
+* Textures from files
+While it is theoretically possible to specify material properties for each shape in a scene,
+this becomes untenable when working with thousands of shapes, let alone millions.
+Instead, it is common to use textures: images whose colors encode useful data. Then,
+rather than storing all of that data on every vertex, each vertex only stores texture coordinates,
+and the corresponding data is looked up when it is actually needed. I focused on textures that encode
+base color, tangent-space normals, ambient occlusion/roughness/metallicity, and emissivity.
+I also set the background in a few renders to a texture rather than just having it fade to black, lest the scenes be far too dark.
+![](finalRenders/texture_cow.png)
+
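+
+A texture lookup then boils down to wrapping the UVs and indexing into a flat array of texels that
+was copied to the GPU. The `Texture` struct and the nearest-neighbor filtering below are illustrative:
+
+```cuda
+// Textures are packed into one big device array of RGB values; each Texture
+// stores its offset into that array plus its width and height.
+__device__ glm::vec3 sampleTexture(const glm::vec3* texels, const Texture& tex, glm::vec2 uv) {
+    uv = uv - glm::floor(uv);   // wrap tiled (and possibly negative) UVs into [0, 1)
+    int x = (int)(uv.x * tex.width);
+    int y = (int)(uv.y * tex.height);
+    if (x >= tex.width)  x = tex.width - 1;
+    if (y >= tex.height) y = tex.height - 1;
+    return texels[tex.offset + y * tex.width + x];
+}
+```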
+
+* Normal Mapping Texture Adaptation
+The normal vector at a point is needed for computing reflections, refractions, and more, since
+it determines the angle of incidence. Technically it is a covector, but
+by convention the winding order of a triangle's vertices determines its planar normal.
+At its most basic, each triangle contains enough information to compute its own normal.
+However, meshes composed of polygons are often used to model smooth objects, so it is common
+to associate each vertex with a normal vector. Then, for a point inside a triangle, one can
+interpolate between the vertex normals to get a normal for that particular location.
+Imagine a brick wall: the mortar crevices could be modelled by adding an enormous number of new triangles.
+Alternatively, by baking the surface normals into a texture, they can be sampled as needed without weighing
+down the mesh with extra geometry. Bump maps and height maps accomplish something very similar, but
+normal maps themselves come in two varieties: object space and tangent space. Object-space maps let one directly
+sample the RGB components and use them as the normal's XYZ values. Tangent-space normal maps instead store
+the normal relative to the surface, so the unperturbed normal points straight up. This requires some extra computation but is generally preferred due to its flexibility. The change-of-basis matrix TBN requires its namesake tangent, bitangent, and normal, of which the normal is just the triangle's planar normal. The other two can be computed relatively easily from the UV/texture coordinates of the vertices. To save on computation, I precompute them when loading a mesh rather than recomputing them every time the normal map is sampled.
+![](finalRenders/tangent_space_normal_map.png)
+
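+
+The per-triangle tangent and bitangent fall out of a small linear system relating the triangle's edge
+vectors to its UV deltas. A sketch of the precomputation and of how the map is applied (assuming glm;
+degenerate UVs are not handled here):
+
+```cuda
+// Precomputed at mesh-load time for each triangle.
+__host__ __device__ void computeTangentBitangent(
+    glm::vec3 p0, glm::vec3 p1, glm::vec3 p2,
+    glm::vec2 uv0, glm::vec2 uv1, glm::vec2 uv2,
+    glm::vec3& tangent, glm::vec3& bitangent) {
+    glm::vec3 e1 = p1 - p0, e2 = p2 - p0;
+    glm::vec2 d1 = uv1 - uv0, d2 = uv2 - uv0;
+    float r = 1.0f / (d1.x * d2.y - d1.y * d2.x);
+    tangent   = glm::normalize(r * (d2.y * e1 - d1.y * e2));
+    bitangent = glm::normalize(r * (d1.x * e2 - d2.x * e1));
+}
+
+// At shading time: remap the texel from [0, 1] to [-1, 1] and rotate it into world space.
+__device__ glm::vec3 applyNormalMap(glm::vec3 texel, glm::vec3 T, glm::vec3 B, glm::vec3 N) {
+    return glm::normalize(glm::mat3(T, B, N) * (texel * 2.0f - 1.0f));
+}
+```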
+ +* Physically Based Rendering Texture Adaptation + +Just the base color | With more effects +-------------------------|------------------------- +![](finalRenders/hawk_viewport.png) | ![](finalRenders/preproc_background.png) + +
+Nowadays, many people use metallic/roughness and albedo instead of diffuse/specular.
+I found a mesh (and its accompanying textures) that used this convention, so I had to
+figure out how to adapt to it. Due to the vastly different behaviors of dielectrics
+and conductors, metallicity is conceptually treated almost as a Boolean value, with the gradations in between
+encoding how much to attenuate the metallic behavior. Physically based rendering tries to use
+more physics to enable more realistic rendering. Light is split into refractive and reflective
+components: metals absorb the refractive component, whilst dielectrics scatter both,
+so the result has both a specular and a diffuse portion.
+The roughness also has a varying effect depending on metallicity. And lastly there is an
+ambient occlusion map that describes how an area might be darker than expected.
+This seems to be tailored more towards rasterization, since in path tracing
+areas that are heavily occluded simply will not bounce light rays back towards light sources in the first place.
+The theory goes much deeper, with whole textbooks written about it, but just scratching the
+surface let me translate the textures and make the model look cool.
+
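+
+The translation amounts to something like the following sketch. This is not a real microfacet BRDF
+and not the exact shading code in this repo; it reuses the hemisphere sampler sketched earlier, and
+the parameter names are illustrative:
+
+```cuda
+__device__ void shadePbr(Ray& ray, glm::vec3 hitPoint, glm::vec3 normal, glm::vec3 albedo,
+                         float metallic, float roughness, float ao,
+                         glm::vec3& throughput, float u1, float u2, float u3) {
+    // Base reflectivity: dielectrics sit near 0.04, metals take on their albedo.
+    glm::vec3 F0 = glm::mix(glm::vec3(0.04f), albedo, metallic);
+
+    if (u1 < metallic) {
+        // Metal-like branch: specular only, blurred by roughness.
+        glm::vec3 mirror = glm::reflect(ray.direction, normal);
+        glm::vec3 fuzz = roughness * sampleHemisphereCosine(normal, u2, u3);
+        ray.direction = glm::normalize(mirror + fuzz);
+        throughput *= F0;
+    } else {
+        // Dielectric-like branch: mostly diffuse, darkened by ambient occlusion.
+        ray.direction = sampleHemisphereCosine(normal, u2, u3);
+        throughput *= albedo * ao;
+    }
+    ray.origin = hitPoint + 0.001f * normal;
+}
+```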
+ +* Background Texture + + Raw Texture | Single Pass | Two Passes + ------------|-------------|-------------- + ![](finalRenders/hawk_nopasses.png) | ![](finalRenders/preproc_background.png) | ![](finalRenders/hawk_darksky.png) + +
+As a Star Wars fan, my thoughts naturally drifted towards making the ship look like it was
+in hyperspace. This was also motivated by the fact that, with path tracing, I could not think of how
+to simulate ambient lighting, and was afraid I would need to sprinkle many small luminescent orbs
+around the scene just to be able to see anything. Once I found a cool background, I was next concerned
+with how to map the rectangular texture onto the viewable sphere. The symmetry of the hyperspace effect, which looks like a tunnel, made this easy, though more complex backgrounds
+would require more investigation. For a unit view direction, x^2 + y^2 + z^2 = 1,
+so z is constrained by the x and y values, letting us map the view direction straight to UV coordinates.
+When z = 1, x = 0 and y = 0, which maps to UVs of (0.5, 0.5). By extension, the reachable points scoop a unit circle
+out of the texture, with (x, y, z) and (x, y, -z) mapping to the same
+UV coordinates, ensuring a smooth transition. Then, I rotated the mesh to align with the tunnel direction
+and voila, it looks like it is cruising through hyperspace.
+
+The next issue was that the sky was too bright and too blue, since I was using that bright color for global illumination as well. So I interpolated the texture color between
+black and itself based on its brightness, as I wanted the bright streaks to remain but the space between
+them to be darker. Then I did it again, since it worked so well the first time.
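+
+Putting the mapping and the darkening together (reusing the `sampleTexture` sketch from the texture
+section; the two darkening passes are exactly the brightness-based lerp described above):
+
+```cuda
+// Map a unit view direction to UVs: (x, y) lies in the unit disk, so (x+1)/2 and
+// (y+1)/2 land inside the texture, and +z / -z hit the same texel.
+__device__ glm::vec3 sampleBackground(const glm::vec3* texels, const Texture& tex, glm::vec3 dir) {
+    glm::vec2 uv((dir.x + 1.0f) * 0.5f, (dir.y + 1.0f) * 0.5f);
+    glm::vec3 c = sampleTexture(texels, tex, uv);
+
+    // Two passes: lerp the color toward black by its own brightness, so bright
+    // streaks survive while the space between them fades out.
+    for (int pass = 0; pass < 2; ++pass) {
+        float brightness = (c.r + c.g + c.b) / 3.0f;
+        c = glm::mix(glm::vec3(0.0f), c, brightness);
+    }
+    return c;
+}
+```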
+
+
+## Performance Analysis
+
+![](img/performanceGraph.png)
+
+* The scene used was a Cornell box with different colored walls, many spheres and cubes that were shiny or glass, and Mario T-Posing menacingly in the back with an emissive texture
+* Caching the First Intersection
+  * The rays start from a known camera position and pass through fixed screen pixels, so the first intersection is identical every iteration and it
+    makes sense to precompute it once and reuse it in later iterations
+  * Since anti-aliasing and depth of field add per-iteration randomness to those first rays, they break this cache, so with those effects enabled the optimization is worthless
+* Sorting Rays by Material
+  * When there are multiple materials, processing the rays in arbitrary order introduces branch divergence since different materials interact with the rays differently. Sorting the rays by material first reduces that divergence
+  * Meshes, which are expensive to check, count as a single material, so this optimization is situationally helpful if the scene in question has many different kinds of materials
+* Compact Dead Threads
+  * If a ray terminates early, remove it from the pool so that there are fewer rays to check
+  * Especially when the geometry of the scene is very open, this optimization is very beneficial
+* Mesh Intersection Test via AABB
+  * Rather than check collisions against every triangle, associate each mesh with its min and max bounds
+    for an axis-aligned bounding box (AABB) and only test the individual triangles if the ray intersects those bounds
+  * Especially if a mesh fits tightly inside its box, this optimization is very helpful. But if the mesh is irregularly shaped enough that the AABB encompasses the whole scene anyway, it would be less useful.
+  * The Ebon Hawk model has more than 69,000 triangles and fits relatively neatly into its AABB, so this
+    is extremely useful
+
+
+
+
+## Debug Views
+
+Texture | Normal | Depth
+--------|--------|-------
+![](finalRenders/texture_cube.png) | ![](finalRenders/debug_normal_cube_tilted.png) | ![](finalRenders/debug_depth_cube.png)
+
+## Headache Inducing Bugs
+
+Buggy | Fixed | Cause
+------|-------|-------
+![](finalRenders/ebonhawk_surface_normals.png) | ![](finalRenders/hawk_norm_interp.png) | Bad Normal Interpolation
+![](finalRenders/hawk_norm_interp.png) | ![](finalRenders/debug_texture_base_color.png) | Tiled UV coordinates can be negative
+![](finalRenders/debug_no_norm_blending.png) | ![](finalRenders/debug_norm_interp_working.png) | The OBJ did not have normals that were meant to be interpolated
+![](finalRenders/debug_cow_normals.png) | ![](finalRenders/debug_cow_normals.png) | I thought the cow would have interpolated normals, but it did not, so this was not actually a bug.
+ +Bug | Cause +----|-------- +![](finalRenders/cornell_badrng.png) | Bad RNG Seeding +![](finalRenders/bug_mesh_triangle_shiny.png) | Checking the explicit error condition t == -1 but forgetting to also eliminate the general t < 0 bad case +![](finalRenders/objLoadingCow.png) | Triangle Intersection was wrong + + +## Further Work +* Heavily optimize the performance with a special focus on reducing branch divergence +* Refactoring the code to be more structured and less haphazard +* Changing the option toggles from `#define` macros to Booleans so changing does not require +lengthy recompilation +* Dive deeper into PBR to make everything look cooler like making the coppery parts shinier in a realistic way that is not just sidestepping the material behaviors +* Learn about the Disney BSDF and the GGX equation +* How to interpolate normals from a tangent space normal map + * Refactor with separate position, normal, uv, index, etc. buffers rather than cramming everything into triangle + * use gltf to load instead of obj + * mikktspace algorithm +* Support for multiple mesh importing + +## Other +* Special thanks to lemonaden for creating a free, high quality mesh of the Ebon Hawk https://sketchfab.com/3d-models/ebon-hawk-7f7cd2b43ed64a4ba628b1bb5398d838 +* Ray Tracing in One Weekend +* IQ's list of intersector examples and the Scenes & Ray Intersection slides from UCSD's CSE168 course by Steve Rotenberg that helped me understand how Möller–Trumbore worked +* UT Austin's CS384 slides on normal mapping tangent that explained the theory on how to convert from tangent space normals to object space and https://stackoverflow.com/questions/5255806/how-to-calculate-tangent-and-binormal for explaining the calculations in a way that did not seem like abuse of matrix notation +* https://wallpaperaccess.com/star-wars-hyperspace for the cool hyperspace wallpaper +* Adobe's articles on the PBR Metallic/Roughness workflow that explained the theory behind it +* reddit user u/cowpowered for tips on performing normal interpolation when working with normal maps and tbns + + + + + + + + diff --git a/external/include/json.hpp b/external/include/json.hpp new file mode 100644 index 00000000..c9af0bed --- /dev/null +++ b/external/include/json.hpp @@ -0,0 +1,20406 @@ +/* + __ _____ _____ _____ + __| | __| | | | JSON for Modern C++ +| | |__ | | | | | | version 3.5.0 +|_____|_____|_____|_|___| https://github.com/nlohmann/json + +Licensed under the MIT License . +SPDX-License-Identifier: MIT +Copyright (c) 2013-2018 Niels Lohmann . + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +#ifndef NLOHMANN_JSON_HPP +#define NLOHMANN_JSON_HPP + +#define NLOHMANN_JSON_VERSION_MAJOR 3 +#define NLOHMANN_JSON_VERSION_MINOR 5 +#define NLOHMANN_JSON_VERSION_PATCH 0 + +#include // all_of, find, for_each +#include // assert +#include // and, not, or +#include // nullptr_t, ptrdiff_t, size_t +#include // hash, less +#include // initializer_list +#include // istream, ostream +#include // random_access_iterator_tag +#include // accumulate +#include // string, stoi, to_string +#include // declval, forward, move, pair, swap + +// #include +#ifndef NLOHMANN_JSON_FWD_HPP +#define NLOHMANN_JSON_FWD_HPP + +#include // int64_t, uint64_t +#include // map +#include // allocator +#include // string +#include // vector + +/*! +@brief namespace for Niels Lohmann +@see https://github.com/nlohmann +@since version 1.0.0 +*/ +namespace nlohmann +{ +/*! +@brief default JSONSerializer template argument + +This serializer ignores the template arguments and uses ADL +([argument-dependent lookup](https://en.cppreference.com/w/cpp/language/adl)) +for serialization. +*/ +template +struct adl_serializer; + +template class ObjectType = + std::map, + template class ArrayType = std::vector, + class StringType = std::string, class BooleanType = bool, + class NumberIntegerType = std::int64_t, + class NumberUnsignedType = std::uint64_t, + class NumberFloatType = double, + template class AllocatorType = std::allocator, + template class JSONSerializer = + adl_serializer> +class basic_json; + +/*! +@brief JSON Pointer + +A JSON pointer defines a string syntax for identifying a specific value +within a JSON document. It can be used with functions `at` and +`operator[]`. Furthermore, JSON pointers are the base for JSON patches. + +@sa [RFC 6901](https://tools.ietf.org/html/rfc6901) + +@since version 2.0.0 +*/ +template +class json_pointer; + +/*! +@brief default JSON class + +This type is the default specialization of the @ref basic_json class which +uses the standard template types. 
+ +@since version 1.0.0 +*/ +using json = basic_json<>; +} // namespace nlohmann + +#endif + +// #include + + +// This file contains all internal macro definitions +// You MUST include macro_unscope.hpp at the end of json.hpp to undef all of them + +// exclude unsupported compilers +#if !defined(JSON_SKIP_UNSUPPORTED_COMPILER_CHECK) + #if defined(__clang__) + #if (__clang_major__ * 10000 + __clang_minor__ * 100 + __clang_patchlevel__) < 30400 + #error "unsupported Clang version - see https://github.com/nlohmann/json#supported-compilers" + #endif + #elif defined(__GNUC__) && !(defined(__ICC) || defined(__INTEL_COMPILER)) + #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) < 40800 + #error "unsupported GCC version - see https://github.com/nlohmann/json#supported-compilers" + #endif + #endif +#endif + +// disable float-equal warnings on GCC/clang +#if defined(__clang__) || defined(__GNUC__) || defined(__GNUG__) + #pragma GCC diagnostic push + #pragma GCC diagnostic ignored "-Wfloat-equal" +#endif + +// disable documentation warnings on clang +#if defined(__clang__) + #pragma GCC diagnostic push + #pragma GCC diagnostic ignored "-Wdocumentation" +#endif + +// allow for portable deprecation warnings +#if defined(__clang__) || defined(__GNUC__) || defined(__GNUG__) + #define JSON_DEPRECATED __attribute__((deprecated)) +#elif defined(_MSC_VER) + #define JSON_DEPRECATED __declspec(deprecated) +#else + #define JSON_DEPRECATED +#endif + +// allow to disable exceptions +#if (defined(__cpp_exceptions) || defined(__EXCEPTIONS) || defined(_CPPUNWIND)) && !defined(JSON_NOEXCEPTION) + #define JSON_THROW(exception) throw exception + #define JSON_TRY try + #define JSON_CATCH(exception) catch(exception) + #define JSON_INTERNAL_CATCH(exception) catch(exception) +#else + #define JSON_THROW(exception) std::abort() + #define JSON_TRY if(true) + #define JSON_CATCH(exception) if(false) + #define JSON_INTERNAL_CATCH(exception) if(false) +#endif + +// override exception macros +#if defined(JSON_THROW_USER) + #undef JSON_THROW + #define JSON_THROW JSON_THROW_USER +#endif +#if defined(JSON_TRY_USER) + #undef JSON_TRY + #define JSON_TRY JSON_TRY_USER +#endif +#if defined(JSON_CATCH_USER) + #undef JSON_CATCH + #define JSON_CATCH JSON_CATCH_USER + #undef JSON_INTERNAL_CATCH + #define JSON_INTERNAL_CATCH JSON_CATCH_USER +#endif +#if defined(JSON_INTERNAL_CATCH_USER) + #undef JSON_INTERNAL_CATCH + #define JSON_INTERNAL_CATCH JSON_INTERNAL_CATCH_USER +#endif + +// manual branch prediction +#if defined(__clang__) || defined(__GNUC__) || defined(__GNUG__) + #define JSON_LIKELY(x) __builtin_expect(!!(x), 1) + #define JSON_UNLIKELY(x) __builtin_expect(!!(x), 0) +#else + #define JSON_LIKELY(x) x + #define JSON_UNLIKELY(x) x +#endif + +// C++ language standard detection +#if (defined(__cplusplus) && __cplusplus >= 201703L) || (defined(_HAS_CXX17) && _HAS_CXX17 == 1) // fix for issue #464 + #define JSON_HAS_CPP_17 + #define JSON_HAS_CPP_14 +#elif (defined(__cplusplus) && __cplusplus >= 201402L) || (defined(_HAS_CXX14) && _HAS_CXX14 == 1) + #define JSON_HAS_CPP_14 +#endif + +/*! +@brief macro to briefly define a mapping between an enum and JSON +@def NLOHMANN_JSON_SERIALIZE_ENUM +@since version 3.4.0 +*/ +#define NLOHMANN_JSON_SERIALIZE_ENUM(ENUM_TYPE, ...) 
\ + template \ + inline void to_json(BasicJsonType& j, const ENUM_TYPE& e) \ + { \ + static_assert(std::is_enum::value, #ENUM_TYPE " must be an enum!"); \ + static const std::pair m[] = __VA_ARGS__; \ + auto it = std::find_if(std::begin(m), std::end(m), \ + [e](const std::pair& ej_pair) -> bool \ + { \ + return ej_pair.first == e; \ + }); \ + j = ((it != std::end(m)) ? it : std::begin(m))->second; \ + } \ + template \ + inline void from_json(const BasicJsonType& j, ENUM_TYPE& e) \ + { \ + static_assert(std::is_enum::value, #ENUM_TYPE " must be an enum!"); \ + static const std::pair m[] = __VA_ARGS__; \ + auto it = std::find_if(std::begin(m), std::end(m), \ + [j](const std::pair& ej_pair) -> bool \ + { \ + return ej_pair.second == j; \ + }); \ + e = ((it != std::end(m)) ? it : std::begin(m))->first; \ + } + +// Ugly macros to avoid uglier copy-paste when specializing basic_json. They +// may be removed in the future once the class is split. + +#define NLOHMANN_BASIC_JSON_TPL_DECLARATION \ + template class ObjectType, \ + template class ArrayType, \ + class StringType, class BooleanType, class NumberIntegerType, \ + class NumberUnsignedType, class NumberFloatType, \ + template class AllocatorType, \ + template class JSONSerializer> + +#define NLOHMANN_BASIC_JSON_TPL \ + basic_json + +// #include + + +#include // not +#include // size_t +#include // conditional, enable_if, false_type, integral_constant, is_constructible, is_integral, is_same, remove_cv, remove_reference, true_type + +namespace nlohmann +{ +namespace detail +{ +// alias templates to reduce boilerplate +template +using enable_if_t = typename std::enable_if::type; + +template +using uncvref_t = typename std::remove_cv::type>::type; + +// implementation of C++14 index_sequence and affiliates +// source: https://stackoverflow.com/a/32223343 +template +struct index_sequence +{ + using type = index_sequence; + using value_type = std::size_t; + static constexpr std::size_t size() noexcept + { + return sizeof...(Ints); + } +}; + +template +struct merge_and_renumber; + +template +struct merge_and_renumber, index_sequence> + : index_sequence < I1..., (sizeof...(I1) + I2)... 
> {}; + +template +struct make_index_sequence + : merge_and_renumber < typename make_index_sequence < N / 2 >::type, + typename make_index_sequence < N - N / 2 >::type > {}; + +template<> struct make_index_sequence<0> : index_sequence<> {}; +template<> struct make_index_sequence<1> : index_sequence<0> {}; + +template +using index_sequence_for = make_index_sequence; + +// dispatch utility (taken from ranges-v3) +template struct priority_tag : priority_tag < N - 1 > {}; +template<> struct priority_tag<0> {}; + +// taken from ranges-v3 +template +struct static_const +{ + static constexpr T value{}; +}; + +template +constexpr T static_const::value; +} // namespace detail +} // namespace nlohmann + +// #include + + +#include // not +#include // numeric_limits +#include // false_type, is_constructible, is_integral, is_same, true_type +#include // declval + +// #include + +// #include + + +#include // random_access_iterator_tag + +// #include + + +namespace nlohmann +{ +namespace detail +{ +template struct make_void +{ + using type = void; +}; +template using void_t = typename make_void::type; +} // namespace detail +} // namespace nlohmann + +// #include + + +namespace nlohmann +{ +namespace detail +{ +template +struct iterator_types {}; + +template +struct iterator_types < + It, + void_t> +{ + using difference_type = typename It::difference_type; + using value_type = typename It::value_type; + using pointer = typename It::pointer; + using reference = typename It::reference; + using iterator_category = typename It::iterator_category; +}; + +// This is required as some compilers implement std::iterator_traits in a way that +// doesn't work with SFINAE. See https://github.com/nlohmann/json/issues/1341. +template +struct iterator_traits +{ +}; + +template +struct iterator_traits < T, enable_if_t < !std::is_pointer::value >> + : iterator_types +{ +}; + +template +struct iterator_traits::value>> +{ + using iterator_category = std::random_access_iterator_tag; + using value_type = T; + using difference_type = ptrdiff_t; + using pointer = T*; + using reference = T&; +}; +} +} + +// #include + +// #include + + +#include + +// #include + + +// http://en.cppreference.com/w/cpp/experimental/is_detected +namespace nlohmann +{ +namespace detail +{ +struct nonesuch +{ + nonesuch() = delete; + ~nonesuch() = delete; + nonesuch(nonesuch const&) = delete; + void operator=(nonesuch const&) = delete; +}; + +template class Op, + class... Args> +struct detector +{ + using value_t = std::false_type; + using type = Default; +}; + +template class Op, class... Args> +struct detector>, Op, Args...> +{ + using value_t = std::true_type; + using type = Op; +}; + +template