Spaces are system-global; it would be a mess if one app could adjust the space to its own needs and thereby affect all the others. HoloLens tries to keep the current space as up to date as possible, no matter what. Unless there is some hacky workaround, this process cannot be stopped. What your app gets, at a certain rate and a certain resolution, is the most recent snapshot of the spatial mesh, possibly processed/smoothed. However you modify it, you are modifying just that: the snapshot. You can run your raycasts, visualize the mesh, and do whatever else, but you always do it against the most recent snapshot, not the actual space you can see in the settings.
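To make the snapshot semantics concrete, here is a toy model in Python. The class and field names (`SystemSpatialMapper`, `snapshot`, `refine`) are my own invention, not any real HoloLens API; the point is only that the app works on a copy, so editing that copy never flows back into the system's live space:

```python
import copy

class SystemSpatialMapper:
    """Toy stand-in for the OS-level scanner: it owns the live mesh
    and keeps refining it regardless of what apps do (hypothetical
    model, not a real HoloLens API)."""
    def __init__(self):
        self._live_mesh = {"triangles": 100, "version": 1}

    def refine(self):
        # The system keeps updating its space continuously.
        self._live_mesh["triangles"] += 10
        self._live_mesh["version"] += 1

    def snapshot(self):
        # An app only ever receives a copy of the current state.
        return copy.deepcopy(self._live_mesh)

mapper = SystemSpatialMapper()
app_mesh = mapper.snapshot()

# The app smooths/edits its own copy...
app_mesh["triangles"] = 50

# ...but the system's live mesh is untouched; the feed is one-way.
mapper.refine()
assert mapper.snapshot()["triangles"] == 110
assert app_mesh["triangles"] == 50
```

Raycasting and visualization in your app would operate on `app_mesh` here; the next `snapshot()` call simply hands you a fresh copy of whatever the system has in the meantime.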
Anchors, as far as I know, do not rely fully (if at all) on spatial understanding but on tracking. Stabilizing an anchor also runs against the real-time space / point cloud; you cannot tell an anchor to attach itself to your local spatial snapshot. The misunderstanding about what anchors are based on comes from the fact that one conventionally first finds a raycast hit on the spatial mesh and then attaches an anchor there, even though the anchor itself is not really based on that same spatial mesh.
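The same distinction can be sketched as a toy model: the raycast against the mesh snapshot is used exactly once, to pick a pose, after which the anchor lives in the tracking frame and ignores the mesh entirely. All names here (`TrackingSystem`, `SpatialAnchorModel`, `raycast_hit_on_snapshot`) are hypothetical illustrations, not the real API:

```python
class TrackingSystem:
    """Toy model of head tracking: defines the world frame and may
    apply small corrections as tracking improves (hypothetical)."""
    def __init__(self):
        self.drift_correction = 0.0

class SpatialAnchorModel:
    """An anchor holds a pose in the tracking frame; it keeps no
    reference to any mesh triangle (conceptual sketch only)."""
    def __init__(self, tracker, position):
        self.tracker = tracker
        self._position = position

    def world_position(self):
        # Stabilized by tracking, independent of any mesh snapshot.
        return self._position + self.tracker.drift_correction

def raycast_hit_on_snapshot(mesh_snapshot):
    # The raycast against the mesh snapshot only yields a point;
    # it is consumed once, to choose where to place the anchor.
    return 2.5  # hypothetical hit distance along the ray

tracker = TrackingSystem()
mesh_snapshot = {"version": 1}
anchor = SpatialAnchorModel(tracker, raycast_hit_on_snapshot(mesh_snapshot))

# Replacing or editing the mesh snapshot does not move the anchor...
mesh_snapshot = {"version": 2}
assert anchor.world_position() == 2.5

# ...while a tracking update does.
tracker.drift_correction = 0.1
assert abs(anchor.world_position() - 2.6) < 1e-9
```

This is why modifying your mesh snapshot after placement has no effect on where the hologram stays: the anchor's stability comes from the tracker, not from the geometry you raycast against.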
You cannot truly substitute an adjusted/pre-modeled/modified scan for HoloLens's own spatial understanding as far as its fundamental processes are concerned. The feed is one-way.