C2M-1.2-Release-notes

Below are all the changes in the new version.
If you are able to come up with a great combination of settings, please let us know so we can post it.

Installation Pitfalls
————————

– 3ds Max 2014 and Nitrous: The first builds of Max 2014 seem to have a bug. This results in
the C2M Nitrous viewport not working (either nothing visible or hanging). For me it works after SP3 is
installed (through the Max Extension). I cannot say whether it will work with earlier or other SPs though.
– Installing C2M 1.2 over 1.1 to be used with 3ds Max 2014: To avoid problems, I would recommend 1) uninstalling C2M 1.1
AND 2) installing C2M 1.2 into *another* directory. Otherwise you will most probably need to manually remove a plugin path from 3ds Max, which keeps a copy of all previously used paths, even after they have been removed.

Update Guide:
———————-

If you have a scene saved with C2M 1.1 you should do the following
to get the full benefit of some features:
1) “Reanalyze” the scene (formerly the “Rebuild Normals+Sizes” button)
2) Save the scene. Reload it if it does not reload automatically.
3) Apply the new default material, or manually add a corresponding opacity map

Rendering Notes:
———————-

– If “Details” > 1, the renderer now assumes an opacity map is assigned to the object.
See “Tools->Assign Default Material”
– The visual quality of the new rendering output (Default Scanline, mental ray, etc.) will
be below C2M 1.1 if a setting of Details=1 is used (point flickering). Use higher Details settings to compensate.
In practice, 1.5, 1.75, 2, 3, and 4 should do fine in most scenes.
– C2M 1.2 now requires the normals and tree structures for rendering. Thus you need to
save to .c2m or hit “Reanalyze Scene” before rendering.
– If you have a 3ds Max scene that uses a lot of C2M objects, you can optimize viewport performance by resaving the .c2m files; see “global scene cluster optimization” below.
– Performance-wise, I think the legacy DX9 viewport is still the best bet for most systems, though this can depend strongly on the graphics card and on the DX10 Nitrous rendering mode used (see below).

1.2C (LATEST)
———————-

– Tweaked automatic colors and naming of markers. A point cloud object must be visible for the colors to apply.
– The File rollout is back in the Creation panel
– CORRECTED INFO: The object-space normal maps are designed so that they can be used with a 3ds Max “Normal Bump” map, using the defaults except for enabling “Local XYZ” as “Method”. For correct results you must further set the map amount of the bump map to 100% (in the upper-level material), and when loading the bitmap you must specify a “Gamma Override” of 1.0. Crazy that this is not the default for normal maps in 3ds Max…
Alternatively, you could set the global gamma input value to 1.0 (not recommended).
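As an aside, the gamma requirement is easy to see numerically: a normal map stores a component c in [0,1] that must decode linearly as 2c - 1. A small Python illustration (simplified; it uses a plain power curve rather than exact sRGB handling):

    def decode_normal_component(c):
        # linear decode: stored [0,1] -> normal component [-1,1]
        return 2.0 * c - 1.0

    stored = 128 / 255.0                           # byte 128 encodes 'component ~ 0'
    print(decode_normal_component(stored))         # ~0.004 (correct, gamma override 1.0)
    print(decode_normal_component(stored ** 2.2))  # ~-0.56 (wrong: a 2.2 degamma was applied on load)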
– Slightly improved level of detail, speeding up viewport rendering
– Corrected average point size detection. Side effect: e.g. the relative filter size depends on this value. You must reanalyze the scene.
– The viewport level of detail system now works adaptively; this can boost performance
for huge scenes with strongly varying point density (as encountered in many scenes…), and it distributes detail better where it belongs, which results in better visuals with the same number of actually rendered points. There can be some worst-case scenarios, though, where the point density changes drastically within a cluster (a cluster is a set of roughly 65,000 points in a box shape), so you might for example be able to visually recognize these clusters. In that case you can increase either the “Details” setting or the “PointSize” setting until you cannot recognize them anymore.
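To give a feel for the idea, here is a purely illustrative Python sketch of a per-cluster detail budget driven by projected screen size (the actual heuristic is internal and more involved; all names and numbers here are assumptions):

    import numpy as np

    def cluster_budgets(projected_areas_px, details, cluster_points=65000):
        # 'Details' acts roughly like points per pixel; a cluster can never
        # contribute more points than it contains (illustrative model only).
        budgets = details * np.asarray(projected_areas_px, dtype=float)
        return np.minimum(budgets, cluster_points).astype(int)

    print(cluster_budgets([200_000, 5_000, 120_000], details=1.0))
    # -> [65000  5000 65000]: small/far clusters get few points, near ones are capped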
– NOTE: Because of the changes to the LOD system (see above), visual details are distributed differently than before, so you might need to adjust the Details setting to your taste. For the viewport, Details should be < 2. Larger settings only make sense for rendering with mental ray, Default Scanline, etc.
– The number of points (an upper bound) currently rendered in the viewport is now reported back to 3ds Max as vertices (overlay stats display, key “7”). You can use this number to get a quite good estimate of the number of quads that a shadow-casting mesh would produce when rendering, if “Shadow Details” were set to the current Details setting and the rendering resolution matched the viewport resolution. Note that this number depends on output resolution, viewing position and orientation, FOV, aspect ratio, and the Details setting. The real number will typically be below this, as the final LOD stage is calculated on the GPU and is not available.
– Added automatic degeneration: If 3ds Max’s Adaptive Degradation setting is > 0 fps, then the point cloud’s details will be automatically reduced to keep up that frame rate (as long as any mouse button is down and you move the mouse).
This happens iteratively while you move and might take a few seconds until the desired frame rate is reached. If the frame rate (as 3ds Max reports it) fluctuates heavily, you may notice some fluctuation of the presented detail while moving, as the points are continuously scaled up/down to match the desired frame rate. I would recommend setting the desired fps to at least 30; below that, I have the feeling that the frame rate reported by 3ds Max is often very unreliable.
NOTE: If you have multiple point cloud objects, I would recommend enabling the per-object “Never Degrade” checkbox in the Object Properties; otherwise 3ds Max itself might decide to hide the whole object for degradation.
– NOTE: The automatic degeneration together with the improved LOD system should allow you to work smoothly with much larger point clouds than before: as long as the points fit into memory/VRAM, you should be on the happy side. (I simulated this with a scene containing 64 instances of a 27-million-point cloud object and could work nicely even on an outdated laptop.) If the adaptation takes too long, you can use Rendering->Viewport Tools->Set Max Degeneration and Reset Degeneration to give it a hint; it will then adjust further from no degeneration or from the maximum.
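The feedback loop behaves roughly like this minimal Python sketch (my reconstruction of the described behaviour, not the plugin’s actual code; render_frame, the gain, and the bounds are illustrative assumptions):

    import time

    def adaptive_degeneration(render_frame, frames, target_fps,
                              detail=1.0, lo=0.05, hi=1.0, gain=0.1):
        # Each frame: measure the achieved fps, then nudge the detail factor
        # toward the value that matches the target (illustrative loop).
        for _ in range(frames):
            t0 = time.perf_counter()
            render_frame(detail)
            fps = 1.0 / max(time.perf_counter() - t0, 1e-6)
            # proportional step: too slow -> reduce detail, headroom -> restore it
            detail *= 1.0 + gain * (fps - target_fps) / target_fps
            detail = min(max(detail, lo), hi)
        return detail

    # Toy renderer whose cost scales with detail; settles near 30 fps.
    print(adaptive_degeneration(lambda d: time.sleep(d / 20), 50, target_fps=30.0))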
– Renamed “Skip nth point” to “Import each Nth point”, which is what it actually does.
– “Save As” now preselects the current filename
– Made “Remove Big Outlier” consistent with the other filters in how it interprets the filter size.
– Added Nitrous viewport support for 64-bit Max 2014 and Max 2015 (DX9, DX10, DX11, and Software are supported), with the same features as the old legacy DX9 viewport. Unfortunately the Nitrous API does not expose some critical features that are required for optimum performance, so for very large scenes (e.g. 1500 million points) performance can become CPU-limited.

To get the best performance there are now a lot of options to choose from:
1. legacy DX9
2. Nitrous using DX9. (This one is easy to rule out: it will be slower than legacy DX9 for high point counts and never faster in any situation.)
3. Nitrous using DX10/11, offering five different render modes; see below

Generally speaking, I currently assume legacy DX9 will on average be the fastest method and will render the highest number of points smoothly. The Nitrous DX10 mode can be the fastest for reasonable point counts, though (especially Mode #2). This can vary with the machine, the drivers, the scene itself, and even the viewpoint…

NOTE: if you find a mode that works generally the best for your scenes and machines, let me know.

– Added a “Nitrous Mode” toggle button in the Rendering rollout. It lets you select between several rendering methods for point clouds. It displays the newly selected mode name in 3ds Max’s bottom status line, and the new mode becomes active immediately. The mode only has an effect if Nitrous is used with DirectX 10 or higher. The different modes vary in their performance on different video cards. The default mode #0 seems to be the best so far as I have tested, except for Mode #2, which is mostly the fastest but renders points only one pixel in size. Further, the VS modes seem to be faster on ATI, while the GS modes seem to be faster on NVIDIA, as far as I have tested.
– Improved viewport visuals: for the “VS” and “P+VS” rendering methods with Nitrous DX10+, points are now rendered as circles instead of quads.
– If multiple viewports are visible, only the active viewport is refreshed while holding a mouse button (moving the camera, moving objects, etc.).
– Non-perspective viewports: point size and level of detail are evaluated as in perspective views. Only Max 2013 and upward.
– Projection now shoots against spheres (instead of oriented disks). Also fixed a bug that could cause a wrong distance calculation, leading to hit points being wrongly dismissed.
– Projection->Filter Size interpretation has been tuned. A value of 1 means to average within the distance to the next logical neighbor (connected vertex in a mesh, or pixel in a map); 0.5 then means only half that distance, and so on. Generally, settings of 0.5 to 1.5 should do fine for all scenes.
– Bugfix: projection to a map with UVs crossing the 0 or 1 border resulted in strange distortions.
– Projection: The Offset Bias and Max Distance settings are now proportional to the extents of the target mesh (instead of the point cloud’s extents)
– Cluster batching heuristics are used to remove CPU workload from the Nitrous viewport and compensate for its lack of batching features. With this, it seems that on average DX10 Nitrous is now on the same performance level as the legacy DX9 viewport, and even faster. This holds true also for very huge scenes, though there can still be some situations where legacy DX9 outperforms it. But I guess…
– Improved usability of the License Manager: it now states clearly that the account is stored online. Offline activation has been improved to avoid user mistakes, with some more helpful error messages for the typical usage mistakes (like using .machine_id files on the wrong machine, etc.). Lastly, offline activation can now also be used for static MAC address licensing (network slave).
– Added cluster and neighbor statistics to the Info box. Neighbor Mean is in local/object units (exactly as imported); SD is the standard deviation. A cluster is simply a set of points inside a box area that is rendered at once by the GPU; it strongly determines the efficiency of the level of detail system. You can see the clusters as they are loaded one by one.
– When saving to .c2m, a global scene cluster optimization is now always performed. In case you are interested, the following tries to shed some light on what it does and how you could benefit by using it deliberately in certain situations. For the typical use case (one c2m object with all the points in the scene), you never need to think about it.
The optimization tries to adapt the number of clusters to the current scene. The goal is to have the smallest clusters without overloading the CPU, as the more clusters there are, the more the CPU becomes the bottleneck. This balances CPU vs. GPU workload and thus improves viewport performance. The number of clusters depends solely on the total potentially visible points in the complete scene (not just the current object); so for example, if you have multiple instances and/or multiple C2M objects, all of them are summed up. The optimization assumes that at least 50% of the points are potentially visible in an average frame. We might later introduce a tweakable parameter for this, or alternatively an optimize-for-current-viewing-perspective, to further optimize performance in scenes with different requirements. Normally you will not need to use this feature deliberately. It can be very useful, for example, if you place a lot of C2M objects: imagine placing multiple instanced copies of cars in the scene. Then it would be wise to save the car.c2m file while all those instances are in the scene. That will greatly improve performance (especially under Nitrous, as it is on the rather slow side compared to native DirectX 9). For completeness: another assumption is that you are using at least an i3 with 2 GHz or more, and not an Intel Atom, for example; in that case the result would not be optimal.
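As a toy model of the trade-off described above (illustrative Python; the batch budget and visibility fraction are invented numbers, not the shipped heuristic):

    def optimal_cluster_size(total_scene_points, visible_fraction=0.5,
                             cpu_batch_budget=4000, cluster_cap=65000):
        # Smaller clusters give finer-grained LOD (good for the GPU) but more
        # per-frame batches (bad for the CPU). Pick the smallest cluster size
        # whose expected number of visible clusters still fits the CPU budget.
        expected_visible = total_scene_points * visible_fraction
        size = expected_visible / cpu_batch_budget
        return int(min(max(size, 1), cluster_cap))

    # e.g. the 64 instances of a 27-million-point object mentioned above
    print(optimal_cluster_size(64 * 27_000_000))   # -> 65000 (hits the cap)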
– Increased the precision of the statistical analysis from 32 to 64 bit, as I found out that for more than roughly 20 million points, 32-bit precision could lead to an overestimation of the standard deviation by a factor of 2. This could possibly skew the results of the point-removal filters. This should also help improve the efficiency of the level of detail system.
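The failure mode is easy to reproduce with a naive single-pass variance (Python/NumPy sketch; this illustrates the precision issue, not the plugin’s exact computation):

    import numpy as np

    rng = np.random.default_rng(0)
    # 20M values whose mean is large relative to their spread, as is
    # common for georeferenced scan coordinates
    x = rng.normal(loc=1000.0, scale=0.01, size=20_000_000)

    def sd_naive(values, dtype):
        # single-pass E[x^2] - E[x]^2: the subtraction cancels nearly all
        # significant bits when done in 32-bit precision
        v = values.astype(dtype)
        m = v.mean(dtype=dtype)
        m2 = (v * v).mean(dtype=dtype)
        return float(np.sqrt(max(m2 - m * m, 0.0)))

    print(sd_naive(x, np.float32))   # far off from the true 0.01
    print(sd_naive(x, np.float64))   # ~0.01, correct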

1.2B
———————-
– Spline extraction in the EditBox: Added a CrossSection spinner button that creates one spline object, divided into
the specified number of spline sections. The individual splines can be accessed via the sub-objects “Editable Spline->Spline”
– BSP creation now uses the SAH (surface area heuristic).
NOTE: you should rebuild normals if the scene was previously saved with
C2M 1.1
– Removed the “Filtering” rollout from the main modifier. These features can now be found in the EditBox
– The filtering methods now have a filter size parameter, which can be set either as an absolute value or as a relative value. The latter works similarly to the old preset buttons: the filter size is then the specified value multiplied by the average point size (= average distance between two closest points). When the absolute method is used, the filter size is exactly the number entered.
– Added an “Adaptive” mode for filtering. When a relative filter size is used, this makes the filter size depend on the local density of the point cloud: the filter size increases where there are only a few big points, and decreases where there are many small points. The actual filter size per point is then simply the point’s size (distance to its nearest neighbors) multiplied by the input factor. Generally I would recommend it for most filters most of the time.
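Summarizing how the three controls combine (a small Python sketch; the parameter names are illustrative):

    def filter_radius(value, mode, avg_point_size,
                      adaptive=False, local_point_size=None):
        # value:            the number entered in the filter size field
        # avg_point_size:   scene-wide average distance between closest points
        # local_point_size: this point's own distance to its nearest neighbors
        if mode == "absolute":
            return value                      # used exactly as entered
        if adaptive and local_point_size is not None:
            return value * local_point_size   # grows in sparse areas, shrinks in dense ones
        return value * avg_point_size         # plain relative mode

    print(filter_radius(1.5, "relative", avg_point_size=0.02))   # ~0.03
    print(filter_radius(1.5, "relative", 0.02,
                        adaptive=True, local_point_size=0.05))   # ~0.075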
– 3dsMax 2015 support (DirectX9 renderer + Installer).
– New button in the EditBox: “Clone Points”. It copies all points (of all clouds2max objects in the scene) that are both visible and inside the EditBox to a new clouds2max object named “ClonedPoints”, and then automatically selects this new “ClonedPoints” object.
NOTE: If you want to keep the cloned points, you have to SAVE to a .c2m file.
NOTE: Before saving or manually building normals, the visuals of the cloned points are the same as if they had just been imported
– Added Tools Rollout
– Tools rollout: Added an “Attach List” button. This works similarly to EditMesh->Attach List, except of course it works on c2m objects. It lets you select one or more c2m objects, which will be attached to the current object (their points are copied to it). The newly attached objects are deleted afterwards, so only the selected object remains, containing all the points.
NOTE: If you want to keep the points, you have to SAVE to a .c2m file.
– In 3ds Max 2015, you can also use the “Attach” button to import ReCap point cloud objects. In order to import all points, you should enable “Fixed In Rendering” and set the detail level to 100% in the ReCap point cloud object.
– Newly created point cloud objects now always have a blank filename, so they do not reuse the previously used file. Also removed the “File” rollout from the Create panel; I think it is easier to use this way. So the File rollout is only visible in the Modify panel.
– File rollout: Simplified stats display
– Workaround to support V-Ray (untested)
– Occlusion culling is now used as a preprocess when rendering.
This can strongly speed up rendering of large point clouds and also reduces memory usage. If shadows are turned off, the memory usage when rendering now depends only on the number of output pixels: for each pixel, at most one quad (two triangles) is generated per point cloud object. For best performance it is better to have one point cloud object with all points, instead of splitting them across multiple objects. If multiple point cloud objects are used, it speeds things up if the objects are spatially separated, so that they do not cover the same area in the view.
I have only tested up to 30M points, which renders in 2 seconds (i7 quad, Default Scanline, 640×480, 1pp). But theoretically, 300M points for example should not slow things down that much at all. So as long as you can initially load the points into RAM, it should render quite nicely. The occlusion culling process benefits strongly from multi-core systems; the more cores you have, the quicker it is. Further, it depends directly on the image resolution, and only rather little on the number of points (similar to the new viewport). NOTE: see also below for notes on the new shadow casting technique.
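For a feel for the bound (simple arithmetic in Python): with shadows off, the generated geometry is capped by the output pixel count per point cloud object, no matter how many points are loaded:

    width, height = 640, 480
    pixels = width * height
    max_quads = pixels             # at most one quad per pixel per point cloud object
    max_triangles = 2 * max_quads  # each quad is two triangles
    print(pixels, max_quads, max_triangles)   # 307200 307200 614400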
– “Global Details” is not used by the renderer anymore. The Details setting has been enhanced and is generally superior anyway.
– Point Scaling maximum range increased
– Improved the Details (points/pixel) setting for values < 1. In this case the minimum point size on screen is automatically increased to “fill” the holes.
– The default material has been enhanced to use Vertex Alpha, which is required by the shadow and supersampling rendering methods. Also, two-sided is now enabled by default.
– Added an “Assign Default Material” button. Reassigns the default material.
– Corrected the gamma value in the viewport to match rendering
– Improved rendering quality: points are rendered as oriented disks. Thus point normals have a strong effect on visual appearance. Generally this way you can use larger point sizes without the scene looking too “blocky”.
– Snapping should work faster and will only snap to “foreground” points if there are overlapping points.
– New shadow casting technique: You will find a “Shadow Details” setting. Shadows are only enabled if this setting is > 0 and the per-object shadow casting property is enabled. The value works similarly to the Details setting, but it controls an extra hidden (transparent) mesh that is solely there to cast shadows. Typically the shadows only require a fraction of the detail of the fully colored point cloud, which can be achieved using this setting. So you will typically want “Shadow Details” to be way below “Details”. NOTE: this requires assigning the new default material with the opacity map; otherwise the shadow casting mesh is not hidden. VRAY: the shadow caster object currently cannot work with V-Ray. You might want to enable two-sided rendering in the point cloud’s material to get more shadows. The shadow casting mesh is essentially the old C2M way of rendering point clouds, though without the occlusion culling, which makes no sense here. So generally all the old performance/memory issues still apply, except that you can easily downscale the shadow mesh, and C2M 1.2 renders shadows 6 times faster than C2M 1.1 while requiring 1/6 of the memory.
NOTE: As the shadow casting mesh is different from the shadow receiving mesh, you might need to tweak the shadow bias settings accordingly to avoid wrong self-shadowing.
– The filtering features are now only applied to visible points inside the EditBox (marked red), so you can quickly test the results on a subset of the points.
– All the smoothing filters now use multithreading on all available cores to speed things up.
– The Cancel button during rendering is now also checked in C2M’s preprocessing phase.
– Added new filters that can be used to better filter out noise: Median Color, Median Intensity, and Median Point Size.
– Added new filters that allow estimating visually nicer normals: the Build Normals, Correct Orientation (normals), and Flip Normals filters. The first overwrites all normals and estimates new normals using the given filter size. The second tries to achieve consistent normals by selectively flipping (existing) normals, evaluating within the given filter size. The last one just flips all normals. NOTE: consistently oriented normals are ONLY required for surface reconstruction; when rendering, they can be aligned automatically. Tested good settings: 1.5 to 4, relative + adaptive. The Correct Orientation filter can also be used multiple times consecutively with different settings.
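The Correct Orientation step follows the classic consistent-orientation idea; a minimal Python sketch (in the spirit of Hoppe et al. 1992, assuming NumPy/SciPy; this is not the plugin’s implementation):

    import numpy as np
    from scipy.spatial import cKDTree

    def orient_normals(points, normals, k=8):
        # Greedily propagate the seed point's orientation through nearest
        # neighbors, flipping any neighbor normal that disagrees with the
        # already-visited point. Handles one connected component only.
        tree = cKDTree(points)
        visited = np.zeros(len(points), dtype=bool)
        stack = [0]
        visited[0] = True
        while stack:
            i = stack.pop()
            for j in tree.query(points[i], k=k)[1]:
                if not visited[j]:
                    if np.dot(normals[i], normals[j]) < 0:
                        normals[j] = -normals[j]
                    visited[j] = True
                    stack.append(j)
        return normals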
– Normals are now treated differently. When rendering (lighting etc.), normals are automatically flipped accordingly. In the viewports, however, normals are not automatically flipped, so you can use the dot3 lighting or normals viewport display modes to visualize and inspect them. This is critical, as the surface reconstruction feature requires consistently aligned normals to work correctly: the normals must correctly enclose a volume. You can use the new normal filters and the Flip Normals buttons in the EditBox to adjust the normals.
– Removed the point number limit for the filtering methods
– Renamed the “Rebuild Normals+Sizes” button to “Reanalyze Scene” and slightly changed its behaviour: it only estimates rough normals that can be used to render shadows etc. For less noisy normals you can now use the advanced normal builder in the EditBox.
What it does: it rebuilds the internal kd-tree structures and updates the point sizes. If it is called for the first time on a scene, a quick and rough normal estimation is also done.
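The point size statistic itself (distance between closest points) is straightforward to recompute; an illustrative Python/NumPy/SciPy version (not the internal kd-tree code):

    import numpy as np
    from scipy.spatial import cKDTree

    def point_sizes(points):
        # each point's distance to its nearest neighbor; k=2 because the
        # closest hit of a point is the point itself
        d, _ = cKDTree(points).query(points, k=2)
        return d[:, 1]

    pts = np.random.default_rng(1).random((10_000, 3))
    print(point_sizes(pts).mean())   # the 'average point size' of this cloud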
– Added a Poisson Surface Reconstruction group to the EditBox. The Octree Depth value controls how fine the resulting mesh will be, and it also affects the calculation time. When Build Mesh is pressed, the visible red points are extracted and a mesh is generated for these points. NOTE: the Poisson reconstruction method requires consistently oriented normals; see the normal filters added above. You can check the consistency of the normals with c2m’s normals display in the viewport.
– EditBox features are now not applied to hidden point cloud objects (the hidden node property)
– EditBox: Added a Projection group. It offers several projections into either vertices or a texture. Each time Create Projection is pressed, the user can select a mesh. This mesh is first copied, and the copied mesh then receives the calculated projection. The projection requires the mesh and points to be aligned/positioned beforehand. If no corresponding point could be found, it is marked red (e.g. in the vertex colors). You can tweak the result using the provided parameters. Max Distance is the maximum allowed distance between a hit point and the source (specified relative to the point cloud’s extents). Offset Bias works like, for example, the 3ds Max Push modifier: the mesh is internally pushed (and pulled) to form a “cage”. Essentially this works similarly to the 3ds Max Projection modifier. Finally, Supersampling can be used to smooth out results or find more points (in conjunction with Max Distance). Angular Spread is only used if Supersampling > 1. The newly created mesh is automatically selected once the result is available.
NOTES: When projecting colors, the colors as displayed in the viewport are used, so you can choose between all ramps and colors. To work correctly, this requires the Point Scaling setting to be such that the point cloud appears to have no holes.
If using draping and multisampling, care must be taken to use only very low Angular Spread values, otherwise it will skew the results. Draping always projects along each vertex’s normal.
For best results, the projection-to-texture-map modes require the mesh to have non-overlapping and non-tiling UVs.
Though of course only UVs in the range 0..1 are used.
The UVChannel, Width, and Height parameters are only used when projecting to a texture map. UVChannel is the input UV mapping channel; Width and Height are the output resolution of the map. If projecting to a map, no new mesh is created; instead only the output texture map is created, which can be used together with the input mesh (and its selected UV channel).
The object-space normal maps are designed so that they can be used with a “Normal Bump” map of 3ds Max, using the defaults except for enabling “Local XYZ” as “Method”. Seems like the map amount must be set to 30%??? (See the CORRECTED INFO in 1.2C above: 100%, with a gamma override of 1.0.) The time to calculate the projection primarily depends on the output resolution (vertices or texture resolution) and the number of cores available.
– Added a new helper object: “Marker”. It works together with the “Point Picking Alignment” buttons in the Tools rollout. You can place markers in the scene, typically enabling snapping beforehand. Just place all three markers for each of the two objects that should be aligned; e.g. Red A will be aligned to match Red B, and so on. Then hit “Point Picking Alignment”.
– Added Point Picking Alignment to the Tools rollout. These buttons work together with the Marker objects, see above. Both buttons align object A to match the orientation (and position and uniform scaling) of object B. The corresponding markers for A and B must be set up beforehand. The first button, “Align this to B”, applies the new transform to the currently selected point cloud object. The second button can be used to align any other object, by letting you choose a different object for A.
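The math behind such a three-marker alignment is a similarity transform from corresponding points; a compact Python/NumPy sketch (Kabsch/Umeyama-style, my reconstruction rather than the plugin’s exact solver):

    import numpy as np

    def align_a_to_b(A, B):
        # A, B: 3x3 arrays, one marker position per row, in matching order
        # (Red A <-> Red B, etc.). Returns scale s, rotation R, translation t
        # such that a marker x of A maps to  s * R @ x + t.
        A = np.asarray(A, dtype=float); B = np.asarray(B, dtype=float)
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        A0, B0 = A - ca, B - cb
        s = np.sqrt((B0 ** 2).sum() / (A0 ** 2).sum())   # uniform scale
        U, _, Vt = np.linalg.svd(A0.T @ B0)
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cb - s * (R @ ca)
        return s, R, t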