Function | Description
---------|------------
binaryActionWriter | Function for writing the actions of an HMDP model to binary files. The function defines sub-functions which can be used to define actions saved in a set of binary files. It is assumed that the states have been defined using 'binaryMDPWriter' and that the ids of the states are known (they can be retrieved using e.g. 'stateIdxDf').
binaryMDPWriter | Function for writing an HMDP model to binary files. The function defines sub-functions which can be used to define an HMDP model saved in a set of binary files (see the usage sketch after this table).
convertBinary2HMP | Convert an HMDP model stored in binary format to a hmp (XML) file. The function simply parses the binary files and creates the hmp file using 'hmpMDPWriter()'.
convertHMP2Binary | Convert an HMDP model stored in a hmp (XML) file to the binary file format.
getBinInfoActions | Info about the actions in the HMDP model under consideration.
getBinInfoStates | Info about the states in the binary files of the HMDP model under consideration.
getHypergraph | Return (parts of) the state-expanded hypergraph.
getInfo | Information about the MDP.
getPolicy | Get parts of the optimal policy.
getRPO | Calculate the retention pay-off (RPO) or opportunity cost for some states.
getSteadyStatePr | Calculate the steady-state transition probabilities for the founder process (level 0).
getWIdx | Return the index of a weight in the model. Note that indices always start from zero (C++ style), i.e. the first weight, the first state at a stage, etc. have index 0.
hmpMDPWriter | Function for writing an HMDP model to a hmp (XML) file. The function defines sub-functions which can be used to define an HMDP model stored in a hmp file.
loadMDP | Load the HMDP model defined in the binary files. The model is created in memory using the external C++ library.
plot.HMDP | Plot the state-expanded hypergraph of the MDP.
plotHypergraph | Plot parts of the state-expanded hypergraph (experimental).
randomHMDP | Generate a "random" HMDP stored in a set of binary files.
runCalcWeights | Calculate weights based on the current policy. Normally run after an optimal policy has been found.
runPolicyIteAve | Perform policy iteration (average reward criterion) on the MDP.
runPolicyIteDiscount | Perform policy iteration (discounted reward criterion) on the MDP.
runValueIte | Perform value iteration on the MDP.
saveMDP | Save the MDP to binary files.
setPolicy | Modify the current policy by setting the policy action of given states.
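
The functions above are typically combined in a build–load–solve workflow. The sketch below is a minimal, illustrative example and is not taken from the package documentation: the writer sub-function names ('setWeights', 'process', 'stage', 'state', 'action', ...) follow the pattern described for 'binaryMDPWriter', but the exact argument names and values (e.g. 'weights', 'prob', 'end', and the solver arguments 'w', 'dur', 'rate', 'rateBase', 'termValues') are assumptions that should be checked against the help pages.

```r
library(MDP2)

## Build a tiny two-stage model and save it to binary files (prefix "tiny_").
prefix <- "tiny_"
wrt <- binaryMDPWriter(prefix)
wrt$setWeights(c("Duration", "Net reward"))   # weight labels attached to actions
wrt$process()                                 # founder process (level 0)
  wrt$stage()                                 # stage 0
    wrt$state(label = "start")
      ## prob: assumed triples (scope, state index, probability); here one
      ## transition to state 0 at the next stage with probability 1
      wrt$action(label = "go", weights = c(1, 10), prob = c(1, 0, 1), end = TRUE)
    wrt$endState()
  wrt$endStage()
  wrt$stage()                                 # stage 1: terminal state, no actions
    wrt$state(label = "end")
    wrt$endState()
  wrt$endStage()
wrt$endProcess()
wrt$closeWriter()

## Load the model into memory (external C++ library) and solve it.
mdp <- loadMDP(prefix)
runValueIte(mdp, w = "Net reward", dur = "Duration",
            rate = 0.1, rateBase = 1,         # assumed discounting arguments
            termValues = 0)                   # value assigned to the terminal state
getPolicy(mdp)                                # inspect the resulting policy
```

The same loaded model object can afterwards be passed to the other solver and query functions, e.g. 'runPolicyIteDiscount', 'runPolicyIteAve', 'getRPO', or 'runCalcWeights'.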