I'm building a model that includes a sedimentation basin (a Pool) with inflows from pumping, direct precipitation, and runoff. A tile underdrain system at the bottom of the basin carries water to a "sump" (also a Pool), from which it is pumped out of the basin; together the underdrain and sump form a dewatering system. The sump is about as tall as the main riser structure out of the basin. Water is pumped from the sump into the main riser, by which it leaves the basin.
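For reference, the water balance I'm trying to represent looks like this (my own notation, not GoldSim element names; Q_drain is the head-limited dewatering flow described below):

```latex
% Water balances for the two Pools (my notation):
\begin{aligned}
\frac{dV_{\mathrm{basin}}}{dt} &= Q_{\mathrm{pumping}} + Q_{\mathrm{precip}} + Q_{\mathrm{runoff}} - Q_{\mathrm{drain}},\\[4pt]
\frac{dV_{\mathrm{sump}}}{dt}  &= Q_{\mathrm{drain}} - Q_{\mathrm{riser}}
\end{aligned}
```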
The dewatering flow rate is the lowest of the following "fluxes": (i) seepage into the underdrain, (ii) pipe flow in the underdrain, or (iii) the capacity/rate of the pump in the dewatering sump. The first two (i and ii) are driven by the head difference between the basin and the sump. My problem is that when the basin starts to fill, the computed seepage and underdrain flow rates are so large that the sump quickly fills up and the head difference becomes negative (a higher water level in the sump than in the surrounding basin). In reality this can't happen: the seepage and underdrain pipe flow would simply stop when the head difference reaches zero. Clearly there's a flaw in my conceptual model that is causing this unrealistic "negative head" situation; I must have failed to parameterize the relationship properly. Right now the head is computed from the volumes in the basin and sump, but the flows should in turn be a function of that head. I don't know if I should be calling a Previous Value, using a Material Delay, or if there is some other fix.
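To make the circularity concrete, here is a plain-Python caricature of the numerics I have in mind (nothing GoldSim-specific; the areas, conductances, and pump rate are invented illustration values, and I've assumed lumped linear conductances for fluxes i and ii). The head difference is clamped at zero, and the per-step transfer is capped at the volume that would equalize the two levels, so the sump can never be driven above the basin within a timestep:

```python
# Plain-Python caricature of the head-limited dewatering numerics.
# Nothing here is GoldSim-specific; all areas, conductances, and the
# pump rate are invented illustration values.

DT = 60.0          # timestep [s]
A_BASIN = 2000.0   # basin plan area [m^2]
A_SUMP = 4.0       # sump plan area [m^2]
C_SEEP = 0.05      # lumped seepage conductance [m^2/s] (assumed linear in head)
C_PIPE = 0.08      # lumped underdrain pipe conductance [m^2/s]
Q_PUMP = 0.02      # dewatering pump capacity [m^3/s]

h_basin, h_sump = 1.0, 0.0   # starting water levels above the underdrain [m]

for _ in range(1000):
    # Head-driven flow basin -> sump: clamp at zero so flow stops
    # (rather than reversing) when the levels equalize.
    dh = max(0.0, h_basin - h_sump)
    q_in = min(C_SEEP * dh, C_PIPE * dh)        # fluxes (i) and (ii)

    # Cap the step volume at the amount that would equalize the two
    # levels, so one large step cannot overshoot into "negative head".
    v_eq = dh / (1.0 / A_BASIN + 1.0 / A_SUMP)
    v_in = min(q_in * DT, v_eq)

    # Pump sump -> riser: flux (iii), limited by what the sump holds.
    v_out = min(Q_PUMP * DT, h_sump * A_SUMP + v_in)

    h_basin -= v_in / A_BASIN
    h_sump += (v_in - v_out) / A_SUMP

print(f"h_basin = {h_basin:.3f} m, h_sump = {h_sump:.3f} m")
```

Whether that per-step cap maps onto a Previous Value, a Material Delay, or something built into the Pool element is exactly what I'm unsure about.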
I'm a bit embarrassed to ask for help with this simple "Hydraulics 101" problem, but I've tried several fixes to no avail. Any thoughts/tips appreciated. Maybe it will be clear if I step away for a bit. Thanks in advance - Charles