Node
Custom nodes are not fully supported and may change at any time. These explanations are subject to frequent change and are not yet set in stone.
When writing a custom node, you need to think of it as a process that takes data A, performs some operation, and returns data B.
All nodes inherit from InfiniteLandsNode. This base class provides the methods needed for the node to be stored on disk, managed, and connected to other nodes. The only member that must be implemented is the dependency property, which ensures that all connections to the node are correctly set.
With that, there are certain interfaces that can be implemented, for example:
- IGive, which provides information on what type of data is being processed, e.g. Height Data or Biome Data.
- IOutput, which ensures that the node is final and that the mesh settings are exactly the same as the final output.
- IHavePreview, which ensures that the node has a preview method available for the editor scene.
- ILoadAsset, which provides the methods needed for asset control, such as setting an asset, checking whether an asset is available (in case there is more than one), processing a specific asset, or returning all the assets that are set.
- IGeneratePoints, used when a node modifies the position of the points used for sampling the other nodes, such as the Warp Filter.
For convenience, if you are creating a process that modifies the height map, I recommend using HeightNodeBase. This abstract class manages all the data caching needed for optimized generation and leaves only a minimal set of methods to implement.
In case you want to create your own node, I recommend checking and duplicating some of the basic nodes, like Normalize or Only Pass Data. The general workflow is quite simple.
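Putting the pieces together, a pass-through node might look like the following sketch. The two overrides mirror the Normalize examples on this page; note that `CopyHeightJob` and the exact type of the `Input` field are hypothetical placeholders, not part of the actual API.

```csharp
// Hedged sketch of a minimal pass-through height node. CopyHeightJob and
// the type of Input are hypothetical; the real API may differ.
public class PassThroughNode : HeightNodeBase
{
    public HeightNodeBase Input; // hypothetical: connection to the previous node

    protected override int PrepareSubNodes(IndexManager manager, ref int currentCount,
        int resolution, float ratio, string requestGuid)
    {
        // No extra boundary data is needed, so forward the request unchanged.
        return Input.PrepareNode(manager, ref currentCount, resolution, ratio, requestGuid);
    }

    public override JobHandle ProcessData(IndexAndResolution target, GenerationSettings settings)
    {
        // Request the previous node's data and schedule a job that writes
        // it into the indices provided by `target`.
        HeightData previous = Input.RequestHeight(settings);
        return CopyHeightJob.ScheduleParallel(settings.globalMap, previous.indexData,
            target, settings.pointsLength, previous.jobHandle);
    }
}
```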
Preparing sub nodes
It’s important to ensure control over the resolution of the data that arrives. Some nodes might have an immense amount of data regenerated for various reasons, and we wouldn’t want to recalculate all of it at every generation step. For that reason, it is important to prepare the current node and the upcoming ones.
The method PrepareSubNodes ensures that. If we look at the simple Normalize node, we can see how it just passes the request down the line. Since normalizing doesn’t need boundary data or many inputs, there’s not much to do.
protected override int PrepareSubNodes(IndexManager manager, ref int currentCount,
int resolution, float ratio, string requestGuid)
{
return Input.PrepareNode(manager, ref currentCount, resolution, ratio, requestGuid);
}
However, looking at a more complex one like the filter BlurNode:
protected override int PrepareSubNodes(IndexManager manager, ref int currentCount,
    int resolution, float ratio, string requestGuid)
{
    float size = ratio * BlurSize;
    int increasedResolution = MapTools.IncreaseResolution(resolution, Mathf.CeilToInt(size));
    int maxResolution = HeightMap.PrepareNode(manager, ref currentCount, increasedResolution,
        ratio, requestGuid);
    if (Mask != null)
        maxResolution = Math.Max(Mask.PrepareNode(manager, ref currentCount, resolution,
            ratio, requestGuid), maxResolution);
    return maxResolution;
}
What’s interesting to see here is how we increase the resolution of the previous maps in relation to the data needed. The Blur Node needs some extra data at the boundaries to ensure there is enough information for the blur to stay consistent between terrains, so we increase the resolution of the map and prepare the previous nodes with that increased size.
This ensures that the previous height maps are generated only once, and only at the maximum required size. There are other cases where it might be interesting to change the ratio, the branch of processes, or other details. Check the other nodes, like the Warp Node or the Combine Node, for more examples.
Processing the data
Once the nodes are prepared, we can proceed with the actual generation of the data. This is done in the ProcessData method. This step usually goes as follows:
- Request all data from the previous nodes.
- Schedule the processing job with the previous node's data, writing into the target data.
- Return the job handle.
As an example we can look at the NormalizeNode again:
public override JobHandle ProcessData(IndexAndResolution target, GenerationSettings settings)
{
HeightData previousJob = Input.RequestHeight(settings);
return RemapHeightJob.ScheduleParallel(settings.globalMap, previousJob.indexData,
target, minMaxValue, Input.minMaxValue,
settings.pointsLength, previousJob.jobHandle);
}
It’s important to write all the data into the indices that the target struct provides so that subsequent nodes can work on the correct data. Furthermore, it’s extremely important that all the previous data is read-only and is NEVER modified by a subsequent node. Doing so would not only create inconsistencies, but could also produce errors, crashing Unity altogether.
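To make the read-only contract concrete, here is a hedged sketch of what a job backing a node could look like using Unity's job system. `ExampleHeightJob`, its fields, and the saturate transformation are illustrative assumptions, not part of the asset; only the general pattern (read from upstream data, write into the target) comes from this page.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Illustrative only: a job that reads the previous node's data and writes
// into this node's target data. The [ReadOnly] attribute enforces the
// contract that upstream data is never modified.
[BurstCompile]
public struct ExampleHeightJob : IJobFor
{
    [ReadOnly] public NativeArray<float> input; // previous node's data: read, never written
    public NativeArray<float> output;           // this node's target data

    public void Execute(int i)
    {
        // Clamp values to [0, 1] as an example transformation.
        output[i] = math.saturate(input[i]);
    }
}
```

With `[ReadOnly]` in place, Unity's collection safety checks will throw if the job ever tries to write to `input`, catching contract violations before they corrupt shared data.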
Component Processor
A processor is a component that, after the terrain and all masks are generated, applies some extra processing steps to the data. These steps are kept out of the main workflow to ensure compatibility and modularity. Some examples can be found in Chunk Data Processors.
These kinds of processors can:
- Generate new derived data from preexisting masks or data generated by the graph.
- Provide alternative ways to create a mesh.
- Ensure compatibility with other assets by working as a port.
- Many other options.

The general workflow to create your own Processor goes like this:
- Decide on what it processes, for example Chunk Data, and implement the abstract class ChunkProcessor (e.g. ChunkProcessor<ChunkData>). This ensures that the component follows the correct lifecycle alongside the already-existing components and provides some of the methods needed to create your own processing.
- Decide on what it gives, or the end result, for example MeshResult, and implement the interface IGenerate (e.g. IGenerate<MeshResult>).
- In case we are working with ChunkData, the processor should, right after the data is received, add itself to the processor list via AddProcessor. This ensures that the data will not be disposed while being used by other processors.
protected override void OnProcessDone(ChunkData chunk)
{
    chunk.worldFinalData.AddProcessor(this);
    ChunksToProcess.Add(chunk);
    if (chunk.InstantGeneration)
        UpdateRequests(true);
}
We should remove the component from that list once the process is completely done, so that the original data can be disposed and returned to the pool.
- Work with the data:
  - We can cache it and store it in a list so that processing can happen asynchronously.
  - We can work on it immediately and make any changes necessary.
  - We can discard it and just return it to the pool.
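As a sketch of the asynchronous option, a processor could drain its cached list and then release its hold on the data. `GenerateResult` and `RemoveProcessor` are assumed names (RemoveProcessor is assumed to mirror AddProcessor); only `AddProcessor`, `ChunksToProcess`, and `worldFinalData` come from the example above.

```csharp
// Hypothetical sketch of draining cached chunks asynchronously.
private void ProcessPending()
{
    foreach (ChunkData chunk in ChunksToProcess)
    {
        GenerateResult(chunk); // hypothetical: your actual processing step
        // Release our hold so the data can be disposed and returned to the pool.
        chunk.worldFinalData.RemoveProcessor(this);
    }
    ChunksToProcess.Clear();
}
```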
In this step, it is recommended to look at preexisting implementations to see how other components manage this workflow.
Shader
To create a custom shader, it is recommended to check the Minimal Shader and start from there. Otherwise, you can use any of the preexisting shaders to base your work on. All support has been transitioned to Shader Graph for compatibility reasons.