Journal of Statistical Physics (2024) 191:104
https://doi.org/10.1007/s10955-024-03315-7

Inferring Kinetics and Entropy Production from Observable Transitions in Partially Accessible, Periodically Driven Markov Networks

Alexander M. Maier¹ · Julius Degünther¹ · Jann van der Meer¹ · Udo Seifert¹

Received: 15 April 2024 / Accepted: 24 July 2024 / Published online: 14 August 2024
© The Author(s) 2024

Communicated by Keiji Saito.

Corresponding author: Udo Seifert, useifert@theo2.physik.uni-stuttgart.de
Alexander M. Maier, amaier@theo2.physik.uni-stuttgart.de; Julius Degünther, deguenther@theo2.physik.uni-stuttgart.de; Jann van der Meer, vdmeer@theo2.physik.uni-stuttgart.de
¹ II. Institut für Theoretische Physik, Universität Stuttgart, Pfaffenwaldring 57, 70550 Stuttgart, Germany

Abstract
For a network of discrete states with a periodically driven Markovian dynamics, we develop an inference scheme for an external observer who has access to some transitions. Based on waiting-time distributions between these transitions, the periodic probabilities of states connected by these observed transitions and their time-dependent transition rates can be inferred. Moreover, the smallest number of hidden transitions between accessible ones and some of their transition rates can be extracted. We prove and conjecture lower bounds on the total entropy production for such periodic stationary states. Even though our techniques are based on generalizations of known methods for steady states, we obtain original results for those as well.

Keywords Thermodynamic inference · Waiting-time distribution · Periodically driven Markov network · Entropy production rate

1 Introduction

The framework of stochastic thermodynamics provides rules to describe small physical systems that are embedded into a thermal reservoir but remain out of equilibrium due to external driving [1–3]. If the relevant degrees of freedom can be described by a memoryless, i.e., Markovian, dynamics on a discrete set of states, the time evolution of the system is governed by the network structure and the transition rates between the states. In the case of periodically driven transition rates, such a dynamics relaxes into a periodic stationary state (PSS) [4–7], which, as a special case, becomes a non-equilibrium steady state (NESS) [8–11] for constant transition rates.

Since a model is fully specified only if all transition rates are known, practically relevant scenarios in which parts of the model remain hidden [12, 13] require methods to recover, e.g., hidden transition rates on the basis of observable data of a particular form. The combination of such methods with the physical constraints provided by the rules of stochastic thermodynamics comprises the field of thermodynamic inference [14]. With a focus on quantities that have a thermodynamic interpretation, recent works in the field obtain bounds on entropy production [15–20] or affinities [19, 21–23], which are complemented with techniques to recover topological information [24, 25] and speed limits [26–28].

Many of the methods discussed above apply to the case of time-independent driving and cannot straightforwardly be generalized to a PSS. For one of the standard methods of estimating entropy production, the thermodynamic uncertainty relation [29, 30], generalizations to PSSs exist [31–34], which in general require more input than their time-independent counterparts.
For the purpose of estimating entropy production, the usual rationale, given information about residence in states, is to identify appropriate transitions or currents, since such time-antisymmetric data allow one to infer the entropy production. When observing transitions, one can ask the converse question: Can we infer information about states, which are time-symmetric quantities, from antisymmetric data like transitions? In this work, we address how observing transitions allows us to recover occupation probabilities of states if the system is in a PSS. In addition, we will generalize and extend methods from [19] to the periodically driven case to infer transition rates and the number of hidden transitions between two observable ones. We also formulate and compare different lower bounds on the mean total entropy production. These entropy estimators are either proved or supported with strong numerical evidence.

The paper is structured as follows. In Sect. 2, we describe the setting and identify waiting-time distributions between observed transitions as the basic quantities we use to formulate our results. In Sect. 3, we investigate how these quantities can be used to infer kinetic information about the hidden part of a system in a PSS or NESS. Estimators for the mean entropy production are discussed in Sect. 4. We conclude and give an outlook on further work in Sect. 5.

2 General Setup

We consider a network of $N$ states $i \in \{1, \dots, N\}$ that is periodically driven. The system is in state $i(t)$ at time $t$ and follows a stochastic description by allowing transitions between states sharing an edge in the graph. A transition from $i$ to $j$ happens instantaneously with rate $k_{ij}(t)$, which has the periodicity of the driving. To ensure thermodynamic consistency, we assume the local detailed balance condition [1–3]
$$\frac{k_{ij}(t)}{k_{ji}(t)} = e^{F_i(t) - F_j(t) + f_{ij}(t)} \quad (1)$$
at each link, i.e., for each transition and its reverse.
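To illustrate condition (1), rates can be built with the symmetric parametrization used later in Appendix A (Equation (A1)); the following minimal sketch with hypothetical numerical values for $F_i$, $F_j$ and $f_{ij}$ checks that such a pair satisfies local detailed balance by construction.

```python
# Minimal sketch: build a rate pair via the symmetric parametrization of
# Eq. (A1) and verify the local detailed balance condition, Eq. (1).
# The numerical values of F_i, F_j and f_ij are hypothetical.
import math

def rate_pair(F_i, F_j, f_ij, kappa=1.0):
    """Return (k_ij, k_ji) obeying k_ij / k_ji = exp(F_i - F_j + f_ij)."""
    k_ij = kappa * math.exp((F_i - F_j + f_ij) / 2)
    k_ji = kappa * math.exp(-(F_i - F_j + f_ij) / 2)
    return k_ij, k_ji

k_ij, k_ji = rate_pair(F_i=1.2, F_j=0.7, f_ij=2.0)
print(abs(k_ij / k_ji - math.exp(1.2 - 0.7 + 2.0)) < 1e-9)  # True
```

Splitting the exponent symmetrically between forward and backward rate is one convenient choice; any asymmetric split with the same ratio would satisfy (1) equally well.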
The driving with period $T$ may change the free energy $F_k(t)$ of states $k$ or act as a non-conservative force along transitions from $i$ to $j$ with $f_{ij}(t) = -f_{ji}(t)$. Energies in this work are given in units of thermal energy so that entropy production is dimensionless.

Fig. 1 Graphs of partially observable, periodically driven Markov networks. Observable transitions are labeled and displayed in blue. All states and the remaining transitions are assumed to be hidden. Purple states and purple transitions form the boundary of the hidden network. In a only one pair of transitions can be observed. In b the two pairs 1 ↔ 2 and 4 ↔ 6 are observable. The whole network consists of the observable transitions, the boundary of the hidden network and its interior

The dynamics of the probability $p_i(t)$ to occupy state $i$ at time $t$ obeys the master equation
$$\partial_t p_i(t) = \sum_j \left[ -p_i(t)\, k_{ij}(t) + p_j(t)\, k_{ji}(t) \right]. \quad (2)$$
In the long-time limit $t \to \infty$, these networks approach a periodic stationary state (PSS) $p_i^{\mathrm{pss}}(t)$. The transition rates and these probabilities $p_i^{\mathrm{pss}}(t)$ determine the mean entropy production rate in the PSS [1–3]
$$\langle \sigma \rangle_{\mathrm{pss}} \equiv \frac{1}{T} \int_0^T \sum_{ij} p_i^{\mathrm{pss}}(t)\, k_{ij}(t) \ln \frac{p_i^{\mathrm{pss}}(t)\, k_{ij}(t)}{p_j^{\mathrm{pss}}(t)\, k_{ji}(t)}\, dt. \quad (3)$$

In this work, we assume that at least one pair of transitions of a Markov network in its PSS or NESS is observable for an external observer while other transitions and all states are hidden, i.e., not directly accessible for the observer. We illustrate this with graphs of two exemplary Markov networks in Fig. 1. States with an observable transition between them will be called boundary states. If two boundary states are connected by one hidden transition, these transitions and the boundary states form the boundary of the hidden network. Additionally, we assume the period $T$ of the driving to be known.
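As a concrete illustration of Equations (2) and (3), the following sketch relaxes the master equation of a hypothetical three-state unicycle (not one of the networks studied in this paper) to its PSS by Euler integration and then averages the entropy production rate over one period.

```python
# Sketch: Euler-integrate the master equation (2) for a hypothetical,
# periodically driven 3-state unicycle until it reaches its PSS, then
# evaluate the mean entropy production rate <sigma>_pss via Eq. (3).
import math

T = 1.0   # period of the driving
N = 3

def rates(t):
    # free energies F_i(t) and a nonconservative force f along the cycle;
    # the parametrization follows Eq. (A1) with kappa_ij = 1 (toy values)
    F = [1.0 + 0.3 * math.sin(2 * math.pi * t / T), 0.5,
         0.8 + 0.2 * math.cos(2 * math.pi * t / T)]
    f = 2.0
    k = [[0.0] * N for _ in range(N)]
    for i, j in [(0, 1), (1, 2), (2, 0)]:
        k[i][j] = math.exp((F[i] - F[j] + f) / 2)
        k[j][i] = math.exp(-(F[i] - F[j] + f) / 2)
    return k

def step(p, t, dt):
    k = rates(t)
    return [p[i] + dt * sum(-p[i] * k[i][j] + p[j] * k[j][i] for j in range(N))
            for i in range(N)]

dt = 1e-4
p = [1.0 / N] * N
for n in range(int(20 * T / dt)):          # relax over 20 periods
    p = step(p, (n * dt) % T, dt)

sigma = 0.0
for n in range(int(T / dt)):               # average Eq. (3) over one period
    t = (n * dt) % T
    k = rates(t)
    for i in range(N):
        for j in range(N):
            if k[i][j] > 0.0:
                sigma += (dt / T) * p[i] * k[i][j] * math.log(
                    p[i] * k[i][j] / (p[j] * k[j][i]))
    p = step(p, t, dt)

print(sigma > 0.0)   # driven cycle: strictly positive entropy production
```

Since the cycle carries a nonconservative force, the resulting $\langle \sigma \rangle_{\mathrm{pss}}$ is strictly positive; for $f = 0$ and constant free energies it would vanish.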
The task is to determine hidden quantities like the probabilities $p_i^{\mathrm{pss}}(t)$ of such partially accessible networks as well as to estimate the overall entropy production. In such a network, we can determine distributions of waiting times $t$ between two successive observable transitions $I = (ij)$ and $J = (lm)$, whereas observing the full microscopic dynamics is impossible. These waiting-time distributions are of the form
$$\psi_{I \to J}(t|t_0) \equiv \sum_{\gamma_{I \to J}(t, t_0)} P\left[ \gamma_{I \to J}(t, t_0)\, |\, I, t_0 \right]. \quad (4)$$
They depend on the time $t_0 \in [0, T]$ at which transition $I$ occurs within one period of the PSS. Since an arbitrary number of hidden transitions occurs between $I$ and $J$, the distributions are given by the sum of conditional path weights $P[\gamma_{I \to J}(t, t_0)|I, t_0]$ corresponding to all microscopic trajectories $\gamma_{I \to J}(t, t_0)$ that start directly after a transition $I$ at $t_0$ and end with the next observable transition $J$ after waiting time $t$. Furthermore, we define
$$\Psi_{I \to J}(t) = \int_0^T p^{\mathrm{pss}}(t_0|I)\, \psi_{I \to J}(t|t_0)\, dt_0, \quad (5)$$
where we use the conditional probability $p^{\mathrm{pss}}(t_0|I)$ to detect a particular transition $I$ at a specific time $t_0 \in [0, T)$ within the period. Since using trajectories with uncorrelated $t_0$ effectively marginalizes $t_0$ as in Equation (5), e.g., observed trajectories for unknown $T$ in which we discard a sufficient number of successive waiting times between two saved ones, we can always obtain these waiting-time distributions from measured waiting times. In the special case of a NESS,
$$\psi_{I \to J}(t|t_0) = \Psi_{I \to J}(t) \equiv \psi_{I \to J}(t) \quad (6)$$
holds for an arbitrarily assigned period $T$, which we emphasize by writing $\psi_{I \to J}(t)$.

3 Shortest Hidden Paths, Transition Rates and Occupation Probabilities

We first generalize methods to infer the number of hidden transitions in the shortest path between any two observable transitions from a NESS [19, 24] to a PSS.
For any two transitions $I$, $J$ for which the waiting-time distribution does not vanish, the number of hidden transitions $M_{IJ}$ along the shortest path between $I$ and $J$ is given by
$$M_{IJ} = \lim_{t \to 0} \left( t \frac{d}{dt} \ln \left[ \psi_{I \to J}(t|t_0) \right] \right) = \lim_{t \to 0} \left( t \frac{d}{dt} \ln \left[ \Psi_{I \to J}(t) \right] \right), \quad (7)$$
which can be derived following an idea adopted in reference [19] for systems in a NESS. To sketch the general idea of how waiting-time distributions relate to the number of transitions in the short-time limit, we first consider a trajectory that starts in state $i$ at time $t_0$ and ends in a neighboring state $j$ at time $t_0 + t$. In the short-time limit $t \to 0$, the probability of such a trajectory fulfills $\lim_{t \to 0} p(j, t_0 + t | i, t_0)/t = k_{ij}(t_0)$, which is the path weight of an infinitesimally short trajectory that contains only a single transition. Paths with multiple transitions contribute to higher-order terms in $t$ and thus become irrelevant.

In a second step, we use the same idea to compute the path weight of trajectories $\gamma_{I \to J}(t, t_0)$ in the short-time limit. In expanded form, a concrete realization of $\gamma_{I \to J}(t, t_0)$ that contributes to the sum in Equation (4) reads
$$\gamma_{I \to J}(t, t_0) = (i_0, t_0 + \tau_0) \to (i_1, t_0 + \tau_1) \to \cdots \to (i_L, t_0 + \tau_L) \to (i_{L+1}, t_0 + \tau_{L+1}), \quad (8)$$
where we assume that transition $I$ ends in state $i_0$ at time $t_0 + \tau_0$ and the concluding transition $J$ connects states $i_L$ and $i_{L+1}$ after duration $\tau_{L+1} = t$. With the explicit expression for the path weight [1] in a Markov network, we calculate the probability of a particular sequence $i_0 \to \cdots \to i_{L+1}$ as
$$P(i_0 \to \cdots \to i_{L+1}, t\, |\, I, t_0) = \int_0^t d\tau_1 \cdots \int_{\tau_{L-1}}^t d\tau_L\, P\left[ \gamma_{I \to J}(t, t_0)\, |\, I, t_0 \right]$$
$$= \int_0^t d\tau_1 \cdots \int_{\tau_{L-1}}^t d\tau_L \prod_{l=0}^{L} e^{-\int_{\tau_l}^{\tau_{l+1}} \sum_j k_{i_l j}(t' + t_0)\, dt'}\, k_{i_l i_{l+1}}(\tau_{l+1} + t_0)$$
$$= \left( \prod_{l=0}^{L} k_{i_l i_{l+1}}(t_0) \right) \int_0^t d\tau_L \cdots \int_0^{\tau_3} d\tau_2 \int_0^{\tau_2} d\tau_1 + O(t^{L+1}), \quad (9)$$
which is given as an integral over waiting times; the nested integral over the ordered waiting times evaluates to $t^L/L!$. From definition (4), we identify waiting-time distributions as the sum of these probabilities over all possible sequences.
By inserting Equation (9) into the definition (4), we find
$$\lim_{t \to 0} \psi_{I \to J}(t|t_0) = \left( \prod_{l=0}^{M_{IJ}} k_{i_l i_{l+1}}(t_0) \right) \frac{t^{M_{IJ}}}{M_{IJ}!} \sim t^{M_{IJ}}, \quad (10)$$
where we assume that the shortest path of the form (8) has $L = M_{IJ}$ hidden transitions. For systems in a PSS, the waiting-time distributions $\psi_{I \to J}(t|t_0)$ and $\Psi_{I \to J}(t)$ are interchangeable since both of their short-time limits are proportional to $t^{M_{IJ}}$, i.e., are dominated by the shortest path between $I$ and $J$. Since this shortest path between $I$ and $J$ consists of the same number of transitions no matter at what time $t_0$ a trajectory starts, the expression in the middle of Equation (7) is independent of $t_0$. In the example of Fig. 1a, with $I = +$ and $J = +$, we get $M_{++} = 2$ since the two transitions (23) and (31) form the corresponding shortest hidden path. For the graph shown in Fig. 1b with $I = L_-$ and $J = R_+$, we find $M_{L_- R_+} = 1$ since (16) is the transition between $L_-$ and $R_+$.

The rates of observable transitions can be recovered from waiting-time distributions. Given a transition $J = (lm)$ and its reverse $\tilde{J} = (ml)$, we obtain the corresponding rate $k_{lm}$ through
$$\lim_{t \to 0} \psi_{\tilde{J} \to J}(t|t_0) = \lim_{t \to 0} k_{lm}(t + t_0)\, p_l(t|\tilde{J}, t_0) = k_{lm}(t_0), \quad (11)$$
where we use that the short-time limit is dominated by the path with the fewest transitions, which, in this case, is $\tilde{J}$ followed by $J$ without any hidden intermediate transitions. Moreover, the state of the system is known immediately after the initial transition $\tilde{J}$ at time $t_0$ within the period, which leads to $p_l(t = 0|\tilde{J}, t_0) = 1$. Further transition rates are inferable for any combination of transitions $I$ and $J$ with $M_{IJ} = 1$, i.e., whenever the shortest path between $I = (ij)$ and $J = (lm)$ consists of only one hidden transition. For $t \to 0$, its transition rate $k_{jl}(t_0)$ then follows from a Taylor expansion as
$$k_{jl}(t_0) = \lim_{t \to 0} \frac{\psi_{I \to J}(t|t_0)}{k_{lm}(t_0 + t)\, t} \quad (12)$$
with $k_{lj}(t_0)$ following analogously.
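The short-time relations (7) and (12) can be checked on a toy waiting-time density of the form (10); the sketch below uses hypothetical rate values and a single hidden transition.

```python
# Sketch of Eqs. (7) and (12) for a toy waiting-time density with the
# short-time form (10), psi(t) = k_jl * k_lm * t^M / M! * exp(-r*t), for
# M = 1 hidden transition; all rate values are hypothetical.
import math

k_jl, k_lm, r = 1.7, 2.4, 5.0   # hidden rate, observed rate, decay scale
M = 1

def psi(t):
    return k_jl * k_lm * t ** M / math.factorial(M) * math.exp(-r * t)

def local_exponent(t):
    # numerical version of t * d/dt ln psi(t), cf. Eq. (7)
    h = 1e-6 * t
    return t * (math.log(psi(t + h)) - math.log(psi(t - h))) / (2 * h)

M_est = round(local_exponent(1e-5))   # number of hidden transitions
k_est = psi(1e-5) / (k_lm * 1e-5)     # Eq. (12) evaluated at small t
print(M_est, abs(k_est - k_jl) < 1e-3)   # prints: 1 True
```

The exponential survival factor only contributes at order $t$, so both estimates converge as $t \to 0$; in practice the smallest usable $t$ is set by the statistics of the measured waiting times.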
As an example, we obtain the transition rates $k_{14}$, $k_{16}$, $k_{41}$ and $k_{61}$ for the network shown in Fig. 1b even though the related links are hidden.

Occupation probabilities of boundary states of the hidden network can be inferred as follows. During a measurement of length $MT$ with large $M \in \mathbb{N}$, we count the number $N_I(t_0 \leq \tau \leq t_0 + \Delta t)$ of transitions $I = (ij)$ that occur during the infinitesimal interval $[t_0, t_0 + \Delta t]$, where we map all times at which transitions happen into one period of the PSS using a modulo operation. We therefore obtain the rate of transitions $I$ at time $t_0 \in [0, T)$ within one period of the PSS as
$$n_I(t_0) = \lim_{M \to \infty} \lim_{\Delta t \to 0} \frac{N_I(t_0 \leq \tau \leq t_0 + \Delta t)}{M \Delta t} = p_i^{\mathrm{pss}}(t_0)\, k_{ij}(t_0). \quad (13)$$
As the transition rate $k_{ij}(t_0)$ can be determined as described above, we can thus infer $p_i^{\mathrm{pss}}(t_0)$ from experimentally accessible data. Knowing the occupation probabilities of all boundary states of the hidden network allows us to calculate instantaneous currents along single transitions between them using the corresponding inferred transition rates.

These results can be specialized to NESSs, where, to the best of our knowledge, they have not been reported yet either. In this special case, dropping the irrelevant $t_0$ in Equations (11) and (12) leads to constant transition rates. Moreover, in a NESS, the mean rate of transitions $I$, $\langle n_I \rangle_{\mathrm{ss}}$, can directly be obtained from the total number $N_{I,T}$ of observed transitions $I$ along

Fig. 2 Inference of occupation probabilities of boundary states. a Network with hidden states and hidden (black) transitions. Beginning with the light green transitions (12) and (21) in a, darker colored transitions are successively considered as observable. b The sums of occupation probabilities of states within the boundary of each hidden network in the PSS are shown in the respective color. The time-dependent transition rates are given in Appendix A.1.
c As in b but for constant driving, i.e., for a NESS, as a function of $k_{45}$ with the other rates given in Appendix A.1

a measured trajectory of length $T$. Inferring occupation probabilities $p_i^{\mathrm{ss}}$ then only requires dividing by the already inferred transition rate $k_{ij}$, i.e.,
$$p_i^{\mathrm{ss}}\, k_{ij} = \langle n_I \rangle_{\mathrm{ss}} = \lim_{T \to \infty} \frac{N_{I,T}}{T}. \quad (14)$$
Through Equations (13) and (14) we show how to infer occupation probabilities of boundary states of the hidden network. Given the inferable quantities $p_i^{\mathrm{pss}}(t_0)$ or $p_i^{\mathrm{ss}}$, we can calculate how much probability rests on states in the network beyond the so-identified boundary states. As an example, Fig. 2b and c illustrate the probability to find the network shown in Fig. 2a in its boundary states rather than in states within the interior of the hidden network. In both figures, different sets of observable transitions lead to different boundaries of the hidden network. Figure 2b displays sums of probabilities for these systems in a PSS, while Fig. 2c gives an example for NESSs. Each sum of inferable occupation probabilities quantifies the probability of finding the system in the boundary of the hidden network. The closer this sum is to one, the less relevant the inaccessible states in the interior of the hidden network are for the dynamics.

4 Three Estimators for Entropy Production in PSSs

In this section, we estimate irreversibility via the entropy production rate in a PSS. We have seen above how waiting-time distributions contain information on the hidden dynamics of a network. Thus, it seems sensible to expect that these quantities can be used as entropy estimators to infer irreversibility in both the observable and the hidden parts of the network.

For a trajectory of length $T$, reversing the driving protocol leads to transition rates $\tilde{k}_{ij}(t) = k_{ij}(T - t)$. The corresponding waiting-time distributions $\tilde{\psi}_{\tilde{J} \to \tilde{I}}(t|t_0 + t)$ for reversed paths $\tilde{J} \to \tilde{I}$ are the time-reversed versions of $\psi_{I \to J}(t|t_0)$.
Once waiting-time distributions of the form $\tilde{\psi}_{\tilde{J} \to \tilde{I}}(t|t_0 + t)$ have been determined, the fluctuation relation
$$\hat{\sigma}_\psi \equiv \lim_{T \to \infty} \frac{1}{T} \ln \frac{P[\Gamma]}{\tilde{P}[\tilde{\Gamma}]} \quad (15)$$
for a trajectory $\Gamma$ of length $T$ and its time-reverse $\tilde{\Gamma}$ allows us to derive an estimator $\langle \hat{\sigma}_\psi \rangle_{\mathrm{pss}}$ that fulfills
$$\langle \sigma \rangle_{\mathrm{pss}} \geq \langle \hat{\sigma}_\psi \rangle_{\mathrm{pss}} = \sum_{I,J} \int_0^\infty \int_0^T \frac{n_I(t_0)}{T}\, \psi_{I \to J}(t|t_0) \ln \frac{\psi_{I \to J}(t|t_0)}{\tilde{\psi}_{\tilde{J} \to \tilde{I}}(t|t_0 + t)}\, dt_0\, dt \geq 0. \quad (16)$$
Here, the index $\psi$ of the estimator highlights the type of waiting-time distribution that enters its expression in the above inequality. We prove this inequality in Appendix B as a generalization of the trajectory-based entropy estimator $\langle \hat{\sigma} \rangle$ introduced in [19] for NESSs.

For time-symmetric driving, estimating $\langle \sigma \rangle_{\mathrm{pss}}$ with $\langle \hat{\sigma}_\psi \rangle_{\mathrm{pss}}$ does not require reversing the driving protocol in an experiment. In this case, $\tilde{\psi}_{\tilde{J} \to \tilde{I}}(t|t_0 + t)$ results from $\psi_{I \to J}(t|t_0)$ by exploiting the symmetry $k_{ij}(t^* + t_0) = k_{ij}(t^* - t_0)$ of the protocol for all transitions $(ij)$ after finding $t^* \in [0, T)$. In the next paragraphs, we discuss experimentally accessible entropy estimators that do not require waiting-time distributions of the time-reversed process regardless of symmetry properties of the driving.

We have performed extensive numerical computations of random, periodically driven Markov networks corresponding to different underlying graphs to compute
$$\langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}} \equiv \sum_{I,J} \langle n_I \rangle_{\mathrm{pss}} \int_0^\infty \Psi_{I \to J}(t) \ln \frac{\Psi_{I \to J}(t)}{\Psi_{\tilde{J} \to \tilde{I}}(t)}\, dt, \quad (17)$$
where the index $\Psi$ indicates the type of waiting-time distribution used. Here, $\langle n_I \rangle_{\mathrm{pss}}$ is the mean of $n_I(t_0)$ over one period $t_0 \in [0, T]$ that results from measured data. For over $10^5$ randomly chosen systems from unicyclic graphs of three states, diamond-shaped graphs as displayed in Fig. 1a and more complex underlying graphs, the inequalities
$$\langle \sigma \rangle_{\mathrm{pss}} \geq \langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}} \geq 0 \quad (18)$$
hold true as shown in the scatter plots in Fig. 3a, c and e.
Therefore, we conjecture inequality (18) to hold true for periodically driven Markov networks, so that $\langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}}$ is a thermodynamically consistent estimator of $\langle \sigma \rangle_{\mathrm{pss}}$.

Furthermore, transition rates and occupation probabilities that are inferred as described in Sect. 3 allow us to prove another lower bound on the entropy production rate of a Markov network in a PSS which complements the previous two. The estimator
$$\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}} \equiv \int_0^T \sum_{ij \in V} \frac{p_i^{\mathrm{pss}}(t)\, k_{ij}(t)}{T} \ln \frac{p_i^{\mathrm{pss}}(t)\, k_{ij}(t)}{p_j^{\mathrm{pss}}(t)\, k_{ji}(t)}\, dt \quad (19)$$
adds up the contributions to entropy production along transitions of the set $V$ containing all transitions that are either observable or within the boundary of the hidden network. As $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ solely depends on inferable probabilities and rates, its index is $pk$. Since each of the terms in Equation (19) is non-negative for all $t$ and part of $\langle \sigma \rangle_{\mathrm{pss}}$ as given in Equation (3), $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ constitutes a lower bound on the total entropy production rate of the system. The bound is tight if the set $V$ comprises all edges along which the current does not vanish identically. Put differently, $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}} = \langle \sigma \rangle_{\mathrm{pss}}$ if and only if all edges with non-vanishing current are either observable or within the boundary of the hidden network. This bound may often be less tight than the conjectured bound (17) for periodically driven Markov networks, though this ordering does not hold in general, as shown in Fig. 3b, d and f.

Fig. 3 Ratios $\langle \sigma \rangle_{\mathrm{pss}} / \langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}}$ and $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}} / \langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}}$ involving entropy estimators in scatter plots. a Quality factor $\langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}} / \langle \sigma \rangle_{\mathrm{pss}}$ for two data sets of networks with diamond-shaped graph as shown in Fig. 1a. b Comparison between $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ and $\langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}}$ for the blue data set of a. c and d Ratios as in a and b, respectively, for unicyclic three-state systems. e and f Both ratios as in a and b for networks with graph as shown in Fig.
1b. The ratios in all scatter plots are plotted against the random angle $\varphi_0$ that is part of the free-energy parametrization as detailed in Appendix A.2

In the special case of a NESS, the last bound (19) acquires the familiar form
$$\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}} \stackrel{\mathrm{NESS}}{=} \sum_{ij \in V} p_i^{\mathrm{ss}}\, k_{ij} \ln \frac{p_i^{\mathrm{ss}}\, k_{ij}}{p_j^{\mathrm{ss}}\, k_{ji}}. \quad (20)$$
The crucial point is that here the entropy estimator $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ is based on occupation probabilities and transition rates inferred from distributions of waiting times between observable transitions as described in Sect. 3. Although the term (20) shows superficial similarities to the main result of reference [35] interpreted as an entropy estimator, our estimator $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ differs in two ways. First, $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ can be used for partially accessible Markov networks in a PSS, which include systems in a NESS as special cases. Second, the sum in Equations (19) and (20) includes contributions of both observable transitions and transitions within the boundary of the hidden network. These additional contributions allow for a more accurate estimate of the entropy production rate.

5 Concluding Perspectives

5.1 Comparisons with Known Entropy Estimators

In contrast to the steady-state case, only a few methods bound entropy production in partially accessible Markov networks that are driven periodically. One class of such estimators relies on measuring currents and their precision in a similar fashion as the thermodynamic uncertainty relation [29, 30]. A variant of the thermodynamic uncertainty relation proved in reference [36] makes use of the response of currents to a change in speed of the protocol that drives the system out of equilibrium. Since this method requires control over the driving speed, its prerequisites are more restrictive than ours. Similar methods exist that estimate quantities that are related to entropy production in a broader sense.
The result in reference [37] yields an estimate of the combined entropy production of a time-dependently driven process and its time-reverse, whereas reference [32] describes a method to estimate the entropy production of a related auxiliary process.

In addition to the class of methods above, which rely on the measurement of currents, we can identify a second class of methods in which the entropy estimator takes the form of a Kullback-Leibler divergence, which includes our proven bounds (16) and (19) but not the conjectured one (17). More specifically, two of these methods require access to the time-reversed process and use waiting-time distributions. The recent reference [38] yields a proven and a conjectured lower bound on the entropy production rate $\langle \sigma \rangle_{\mathrm{pss}}$ with the aim of utilizing waiting-time distributions that are independent of the time called $t_0$ in the present work. Since the proven estimator of reference [38] uses distributions with marginalized $t_0$, it is at best as tight as $\langle \hat{\sigma}_\psi \rangle_{\mathrm{pss}}$, which makes use of the full distributions. A similar relation applies to the conjectured estimator in reference [38], which requires less data than $\langle \hat{\sigma}_\psi \rangle_{\mathrm{pss}}$ since it is independent of $t_0$. However, obtaining this conjectured bound still requires measurements in the time-reversed system and is therefore more restrictive than our conjectured bound $\langle \hat{\sigma}_\Psi \rangle_{\mathrm{pss}}$.

In a more general setup, we could also allow for additional observations during the time interval between two visible transitions. The method reported in reference [39] allows us to find a stronger lower bound on $\langle \sigma \rangle_{\mathrm{pss}}$ by utilizing such additional, potentially even non-Markovian data. However, kinetic results like particular transition rates or, in particular, the entropy estimator (19) are inaccessible in this more general setup.
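As a numerical sanity check of the proven bound (19) in its NESS form (20), the following sketch computes both the full entropy production rate and the partial sum over an observable edge pair for a toy three-state cycle with hypothetical rates; the partial sum is non-negative and never exceeds the full rate.

```python
# Sketch: on a hypothetical 3-state NESS, the partial sum of Eq. (20) over
# an observable edge set V lower-bounds the full entropy production rate.
import math

k = {(0, 1): 3.0, (1, 0): 0.5, (1, 2): 2.0, (2, 1): 0.4,
     (2, 0): 1.5, (0, 2): 0.3}

def steady_state(k, dt=1e-3, steps=100000):
    p = [1.0 / 3] * 3
    for _ in range(steps):                 # relax the master equation (2)
        dp = [0.0] * 3
        for (i, j), kij in k.items():
            dp[i] -= p[i] * kij
            dp[j] += p[i] * kij
        p = [p[i] + dt * dp[i] for i in range(3)]
    return p

p = steady_state(k)

def edge_sum(edges):                       # Eq. (20) restricted to 'edges'
    return sum(p[i] * k[(i, j)] * math.log(p[i] * k[(i, j)] /
               (p[j] * k[(j, i)])) for (i, j) in edges)

sigma_total = edge_sum(k)                  # all edges: full <sigma>
sigma_pk = edge_sum([(0, 1), (1, 0)])      # V = observable pair 0 <-> 1
print(0.0 <= sigma_pk <= sigma_total)      # True
```

Each pair of opposite edges contributes a current times a log-ratio of the same sign, so every restricted sum is non-negative and the hierarchy holds edge set by edge set.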
Finally, the technique published for the case in which one is able to count transitions in time-dependent time series [40] can also be applied to partially accessible, periodically driven Markov networks. Within this setup, the counting yields empirical currents of visible transitions that are used to obtain a lower bound on $\langle \sigma \rangle_{\mathrm{pss}}$ using machine learning and multiple time series from repeated measurements. Hence, our bound $\langle \hat{\sigma}_{pk} \rangle_{\mathrm{pss}}$ is better than the lower bound established by the method of reference [40] because it not only contains the entropy production of all visible links but also that of some hidden transitions between visible ones.

Fig. 4 Summary of the inference scheme. Starting from waiting-time distributions and the known period $T$ of the driving, the number of hidden transitions $M_{IJ}$ along the shortest path between two observable transitions $I$ and $J$, the occupation probabilities of boundary states $p_i^{\mathrm{pss}}(t)$ as well as the rates $k_{ij}(t)$ of transitions between boundary states are inferable. These quantities enter the three lower bounds on entropy production

5.2 Summary and Outlook

In this paper, we have introduced inference methods based on distributions of waiting times between consecutive observed transitions in partially accessible, periodically driven Markov networks. Successive use of these methods yields information about the kinetics of such a Markov network as well as its underlying topology, including hidden parts, as summarized in Fig. 4. We have first shown how to infer the number of hidden transitions along the shortest path between two observable transitions. We have then derived methods to infer transition rates between boundary states of the hidden network. Occupation probabilities of these boundary states then follow by discerning when the observable transitions happen within one period. Consequently, we find the total probability resting on the hidden states in the interior of the hidden network.
In addition, we have presented three entropy estimators enabling us to estimate the irreversibility of a driven Markov network based on observed transitions during a partially accessible dynamics. The first and third one are proven to be lower bounds on the mean entropy production rate, whereas we conjecture the second estimator to have this property too. Its proof remains an open theoretical challenge. The second and third estimator have the advantage of not requiring control of the driving since its time reversal is not needed. Furthermore, we emphasize that even for the simpler NESS most of these results are original as well.

Finally, it will be interesting to explore whether and how such an approach can be adapted to continuous systems described by a Langevin dynamics. We also hope that our non-invasive method yielding time-dependent transition rates and occupation probabilities will be applied to experimental data of periodically driven small systems.

Funding Open Access funding enabled and organized by Projekt DEAL.

Data availability Data generated during this study are available from the authors upon reasonable request.

Declarations

Conflict of interest The authors have no relevant financial or non-financial interests to disclose.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A: Parameters Used for Numerical Data

A.1 Parameters for Data Shown in Fig. 2

For the network shown in Fig. 2a, the PSS is generated with transition rates
$$k_{ij}(t) = \kappa_{ij}\, e^{(F_i(t) - F_j(t) + f_{ij})/2} \quad \text{and} \quad k_{ji}(t) = \kappa_{ij}\, e^{-(F_i(t) - F_j(t) + f_{ij})/2}. \quad (A1)$$
Therein, we set all $\kappa_{ij} = 1$ as well as $f_{12} = f_{23} = f_{34} = f_{41} = 2$ and $f_{45} = f_{56} = f_{67} = f_{73} = 20/3$. Furthermore, we choose the free energies
$$F_1(t) = 1.0 + 0.3 \cos 2t, \quad F_2(t) = 1.9 + 0.5 \sin 2t, \quad F_3(t) = 1.4 + 0.4 \sin 2t, \quad F_4(t) = 1.0 + 0.7 \sin 2t,$$
$$F_5(t) = 3.1 + 0.1 \sin 2t, \quad F_6(t) = 1.8 + 0.3 \sin 2t, \quad F_7(t) = 2.5 \quad (A2)$$
and solve the master equation (2) of the network for the occupation probabilities $p_i^{\mathrm{pss}}(t)$.

The NESSs for this network as shown in Fig. 2c are generated with the non-zero transition rates $k_{12} = 1.7$, $k_{14} = 0.4$, $k_{21} = 0.6$, $k_{23} = 3.5$, $k_{32} = 0.3$, $k_{34} = 3.3$, $k_{37} = 0.02$, $k_{41} = 5.7$, $k_{43} = 0.3$, $k_{54} = 0.1$, $k_{56} = 0.7$, $k_{65} = 0.2$, $k_{67} = 0.8$, $k_{73} = 4.6$, $k_{76} = 0.05$ and $k_{45} \in [0.2, 75]$.

A.2 Parameters for Data Shown in Fig. 3

For Fig. 3a and b, we have used diamond-shaped networks as shown in Fig. 1a but with observable transitions 1 ↔ 2 and 1 ↔ 4. For Fig. 3c and d, we have used unicyclic three-state systems with observable transitions 1 ↔ 2. All transition rates are parameterized as in Equation (A1) with $\kappa_{ij} = 1$ unless otherwise specified. All diamond-shaped systems are characterized by $f_{12} = f_{14} = f_{23} = f_{31} = f_{43} = 2$.
The free energies of the states in each simulated diamond network are given by
$$F_1(t) = F_1^c + F_1^a \sin \omega t \quad (A3)$$
$$F_2(t) = F_2^c + F_2^a \sin(n_{\omega,2}\, \omega t + \varphi_{0,2}) \quad (A4)$$
$$F_3(t) = F_3^c + F_3^a \sin(n_{\omega,3}\, \omega t + \varphi_{0,3}) \quad (A5)$$
$$F_4(t) = F_4^c + F_4^a \sin(n_{\omega,4}\, \omega t + \varphi_{0,4}), \quad (A6)$$
where the constant energies $F_i^c$, energy amplitudes $F_i^a$ and angles $\varphi_{0,i}$ are randomly picked from normal distributions with mean and variance as given in Table 1. For $j \in \{1, \dots, 4\}$, normally distributed $r_j \sim \mathcal{N}(0, 1)$ define
$$\omega = 25.27 + 5|r_1| \quad \text{or} \quad \omega = 24.67 + 5|r_1| \quad (A7)$$
for the data sets plotted in indigo and in blue, respectively, and for both data sets
$$n_{\omega,i} = \lceil 1 + 1.5 |r_i| \rceil. \quad (A8)$$
With the exception of $k_{13} = 1$ and $k_{31} = \exp[-F_1(t) + F_3(t) + f_{31}]$, the transition rates of the three-state networks used for Fig. 3c and d are given by Equation (A1) with $\kappa_{ij} = 1$ and $f_{12} = f_{23} = f_{31} = 2$. Moreover, the parameters in the free energies
$$F_1(t) = F_1^c + F_1^a \sin \omega t \quad (A9)$$
$$F_2(t) = F_2^c + F_2^a \sin(n_\omega\, \omega t + \varphi_0) \quad (A10)$$
$$F_3(t) = F_3^c \quad (A11)$$
are normally distributed with mean and variance as listed in Table 2. The normally distributed $r_j \sim \mathcal{N}(0, 1)$ for $j = 1, 2$ define
$$\omega = 3.815 + 5|r_1| \quad \text{and} \quad n_\omega = \lceil 1 + 1.5 |r_2| \rceil. \quad (A12)$$
The graph in Fig. 1b with observable transitions 1 ↔ 2 and 4 ↔ 6 is the underlying graph for the Markov networks simulated for Fig. 3e and f. The transition rates with parametrization (A1), wherein $\kappa_{ij} = 1$, determine the dynamics. We have set the non-conservative driving to $f_{12} = f_{23} = f_{34} = f_{65} = f_{64} = 2.0$, $f_{41} = 1.8$, $f_{45} = 1.6$ and $f_{61} = 0.4$ for all networks. For $x \in \{2, 3, \dots, 6\}$, the time-dependent free energies are given by
$$F_1(t) = F_1^c + F_1^a \sin \omega t \quad (A13)$$
$$F_x(t) = F_x^c + F_x^a \sin(n_{\omega,x}\, \omega t + \varphi_{0,x}), \quad (A14)$$
where we have randomly picked the constant energies $F_i^c$, energy amplitudes $F_i^a$ and angles $\varphi_{0,i}$ from normal distributions with mean and variance as given in Table 3. Normally distributed $r_j \sim \mathcal{N}(0, 1)$ with $j \in \{1, \dots,$
6} define

ω = 24.02 + 5|r_1|  and  ω = 29.25 + 5|r_1|   (A15)

for the data set plotted in indigo and in blue, respectively. For both data sets,

n_{ω,i} = ⌈1 + 1.5|r_i|⌉.   (A16)

Table 1  Mean and variance of the normal distributions corresponding to the two parameter sets, indigo (referring to the inset) and blue, defining all systems with diamond-shaped graph used for the scatter plots in Fig. 3a and b

                  F_c1   F_c2   F_c3   F_c4   F_a1    F_a2    F_a3    F_a4    φ_0,2   φ_0,3   φ_0,4
Indigo: Mean      1.58   3.05   1.84   1.64   −0.01   −0.04   0.05    0.11    4.39    4.79    −17.93
        Variance  0.5    0.5    0.5    0.5    0.005   0.005   0.005   0.005   5       5       5
Blue:   Mean      1.25   2.75   1.24   1.34   −0.01   −0.27   −0.02   −0.01   2.62    3.51    −13.29
        Variance  0.5    0.5    0.5    0.5    2.5     2.5     2.5     2.5     5       5       5

Table 2  Mean and variance of the normal distributions corresponding to the parameters defining the systems used for the scatter plots in Fig. 3c and d

          F_c1    F_c2    F_c3    F_a1    F_a2            φ_0
Mean      1.493   0.728   1.568   0.835   −0.349, −3.49   23.2
Variance  0.5     0.5     0.5     0.5     0.5             5

In all cases, we have computed the entropy production rate ⟨σ⟩_pss through Equation (3). For Fig. 3b, d and f, we have also calculated ⟨σ̂_pk⟩_pss via Equation (19), to which only observable transitions, e.g., (12) and (21) for the unicycles, contribute. Integrating initial value problems of the absorbing network, in which observed transitions are redirected into auxiliary states [19, 41], yields the waiting-time distributions ψ_{I→J}(t|t_0). Using the previously obtained probabilities and transition rates, the estimator ⟨σ̂_Ψ⟩_pss can be determined after integrating out the phase-like time on which all waiting-time distributions depend.

Appendix B: Proof for the Entropy Estimator (16)

The observed pairs of directed transitions yield coarse-grained trajectories Γ(t),

(I_0, t_0^0 = 0) —Δt_1→ (I_1, t_0^1 = t_0^0 + Δt_1 mod T) —Δt_2→ (I_2, t_0^2) ⋯
—Δt_N→ (I_N, t_0^N)   (B17)

of length 𝒯 in time, where we choose the starting time of a period such that we observe the first transition at t_0^0 = 0. A trajectory consists of tuples of an observed transition I_i and the phase-like time t_0^i of its observation, as well as of the waiting times Δt_i in between. During Δt_i, an arbitrary number of hidden transitions can occur. Moreover, as we know T, we know the state of the network directly after each instantaneous transition. Thus, the next observable transition is independent of the past, i.e., the system has the Markov property at observable transitions. Therefore, the path weight of a coarse-grained trajectory Γ(t) factorizes as

P[Γ(t)] = P(I_0, 0) ψ_{I_0→I_1}(Δt_1|0) ψ_{I_1→I_2}(Δt_2|t_0^1) ⋯ ψ_{I_{N−1}→I_N}(Δt_N|t_0^{N−1}).   (B18)

This factorization introduces waiting-time distributions of the form

ψ_{I→J}(Δt|t_0) ≡ P[Γ_{I→J}(Δt, t_0)|I, t_0] = Σ_{γ_{I→J} ∈ Γ_{I→J}} P[γ_{I→J}(Δt, t_0)|I, t_0],   (B19)

defined in Equation (4) in terms of conditional path weights of microscopic trajectories γ_{I→J}(Δt, t_0) that start right after an observed transition I and end with the next observed transition J.

The time-reversed process results from reversing the protocol, the trajectory and the time. For a trajectory Γ(t) of length 𝒯 that starts at the phase-like time t_0, the time-reversed transition rates read k̃_ij(t) = k_ij(𝒯 − t), while the time transforms as t̃ = 𝒯 − t. Similarly, all quantities obtained by time reversal will be marked with a tilde. The path weight of the time-reversed trajectory Γ̃ is, in analogy to Equation (B18), given by

P̃[Γ̃(t)] = P̃(Ĩ_N, 𝒯) ψ̃_{Ĩ_N→Ĩ_{N−1}}(Δt_N|t_0^{N−1} + Δt_N) ⋯ ψ̃_{Ĩ_1→Ĩ_0}(Δt_1|0 + Δt_1).   (B20)

Similar to Ref. [42], we estimate the entropy production rate ⟨σ⟩_pss using the log-sum inequality (see, e.g., [43]) as

𝒯 ⟨σ⟩_pss = Σ_ζ P[ζ(t)] ln( P[ζ(t)] / P̃[ζ̃(t)] )
= Σ_{ζ,Γ} P[Γ(t)|ζ(t)] P[ζ(t)] ln( P[Γ(t)|ζ(t)] P[ζ(t)] / ( P̃[Γ̃(t)|ζ̃(t)] P̃[ζ̃(t)] ) )

≥ Σ_{ζ,Γ} P[Γ(t)|ζ(t)] P[ζ(t)] ln( Σ_ζ P[Γ(t)|ζ(t)] P[ζ(t)] / Σ_{ζ̃} P̃[Γ̃(t)|ζ̃(t)] P̃[ζ̃(t)] ) ≡ 𝒯 ⟨σ̂_ψ⟩_pss.   (B21)

Here, P[Γ(t)|ζ(t)] = 1 = P̃[Γ̃(t)|ζ̃(t)] holds if Γ(t) is the correct coarse-grained trajectory onto which ζ(t) is mapped under coarse-graining. Otherwise, these conditional path weights vanish. Replacing the sums of conditional path weights in the logarithm of the second line of inequality (B21) with waiting-time distributions as in Equations (B18) and (B20) yields

ln( P[Γ(t)] / P̃[Γ̃(t)] ) = ln( P(I_0, t_0^0) / P̃(Ĩ_N, 𝒯) ) + Σ_{j=1}^N ln( ψ_{I_{j−1}→I_j}(Δt_j|t_0^{j−1}) / ψ̃_{Ĩ_j→Ĩ_{j−1}}(Δt_j|t_0^j = t_0^{j−1} + Δt_j) ).   (B22)

The first term on the right-hand side, which we denote by δ(𝒯, t_0^0), is periodic when varying one of the fixed times t_0^0 and 𝒯. Hence, |δ(𝒯, t_0^0)| ≤ c holds for a constant c ∈ ℝ⁺. To reformulate the sum on the right-hand side of Equation (B22), we define the conditional counter

ν_{J|I}(t, t_0) ≡ (1/𝒯) Σ_{j=1}^N δ(t − Δt_j) δ(t_0 − t_0^{j−1}) δ_{I,I_{j−1}} δ_{J,I_j}.   (B23)

It sums all terms of trajectories that start with I at t_0 and end with the succeeding observable transition J after waiting time t.

Table 3  Mean and variance of the normal distributions for both parameter sets, indigo and blue, used for the scatter plots in Fig. 3e and f

                  F_c1   F_c2   F_c3   F_c4   F_c5    F_c6   F_a1    F_a2   F_a3   F_a4    F_a5   F_a6
Indigo: Mean      1.38   1.53   1.72   0.48   1.88    1.53   −0.15   0.16   0.09   −0.04   0.17   0.03
        Variance  0.5    0.5    0.5    0.5    0.5     0.5    0.1     0.1    0.1    0.1     0.1    0.1
Blue:   Mean      1.50   0.81   0.98   2.00   −0.26   0.78   −0.55   0.14   0.21   −0.34   0.05   −0.56
        Variance  0.5    0.5    0.5    0.5    0.5     0.5    0.6     0.6    0.6    0.6     0.6    0.6

                  φ_0,2   φ_0,3   φ_0,4   φ_0,5   φ_0,6
Indigo: Mean      0.71    0.10    1.46    0.31    0.73
        Variance  5       5       5       5       5
Blue:   Mean      2.30    1.91    4.40    5.66    4.05
        Variance  5       5       5       5       5
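To connect the conditional counter with recorded data, one can tally, for every pair of consecutive observed transitions, the phase-like time t_0 and the waiting time t in finite bins. The following sketch is purely illustrative and only approximates the delta-function definition above: the event format, bin widths and the division by the total observation time are our own choices, not taken from the paper.

```python
import math
from collections import defaultdict

def conditional_counter(events, T, n_phase=8, dt_bin=0.1):
    """Histogram estimate of the conditional counter nu_{J|I}(t, t0):
    for consecutive observed transitions (I at time s_I) -> (J at time
    s_J), bin the phase-like time t0 = s_I mod T and the waiting time
    t = s_J - s_I, then divide by the total observation time, mimicking
    the 1/T prefactor of the counter.  `events` is a time-ordered list
    of (transition_label, time) tuples."""
    counts = defaultdict(int)
    for (I, s_I), (J, s_J) in zip(events, events[1:]):
        phase_bin = int(((s_I % T)/T)*n_phase)
        wait_bin = int((s_J - s_I)/dt_bin)
        counts[(I, J, phase_bin, wait_bin)] += 1
    total_time = events[-1][1] - events[0][1]
    return {key: c/total_time for key, c in counts.items()}

# Toy record of observed transitions; a label such as "12" stands for
# the directed transition 1 -> 2.
events = [("12", 0.3), ("21", 1.1), ("12", 2.9), ("21", 3.4)]
nu = conditional_counter(events, T=math.pi)
```

In an actual analysis, the binned counter would be compared with its expectation value in terms of waiting-time distributions derived below, which requires much longer event records than this toy example.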
Substituting the conditional counter into Equation (B22) leads to

ln( P[Γ(t)] / P̃[Γ̃(t)] ) = δ(𝒯, t_0^0) + 𝒯 ∫_0^∞ ∫_0^T Σ_{I,J} ν_{J|I}(t, t_0) ln( ψ_{I→J}(t|t_0) / ψ̃_{J̃→Ĩ}(t|t + t_0) ) dt_0 dt.   (B24)

With lim_{𝒯→∞} |δ(𝒯, t_0^0)|/𝒯 = 0, the calculation of the expectation value of Equation (B24) reduces to determining the expectation value of the conditional counter. Following Ref. [19], we argue that

ν_{J|I}(t, t_0) Δt = [No. of transitions (IJ) per 𝒯 after I at t_0 and waiting time t ∈ [0, Δt]] / 𝒯
= [No. of transitions I per 𝒯 at t_0] / 𝒯 × P(J after waiting time t ∈ [0, Δt] | I, t_0)   (B25)

holds true. Together with n_I(t_0)/𝒯 = ⟨n_I⟩_pss p^pss(t_0|I), this results in

⟨ν_{J|I}(t, t_0)⟩ = Σ_Γ ν_{J|I}(t, t_0) P[Γ(t)] = [⟨No. of transitions I at t_0⟩/𝒯] P(J after waiting time t ∈ [0, Δt] | I, t_0)
= [n_I(t_0)/𝒯] ψ_{I→J}(t|t_0) = ⟨n_I⟩_pss p^pss(t_0|I) ψ_{I→J}(t|t_0).   (B26)

In total, the estimator ⟨σ̂_ψ⟩_pss of the mean entropy production rate is given by

⟨σ̂_ψ⟩_pss = ⟨ lim_{𝒯→∞} (1/𝒯) ln( P[Γ(t)] / P̃[Γ̃(t)] ) ⟩
= ∫_0^∞ ∫_0^T Σ_{I,J} ⟨ν_{J|I}(t, t_0)⟩ ln( ψ_{I→J}(t|t_0) / ψ̃_{J̃→Ĩ}(t|t + t_0) ) dt_0 dt
= Σ_{I,J} ∫_0^∞ ∫_0^T [n_I(t_0)/𝒯] ψ_{I→J}(t|t_0) ln( ψ_{I→J}(t|t_0) / ψ̃_{J̃→Ĩ}(t|t + t_0) ) dt_0 dt,   (B27)

where the second equality follows from the vanishing δ(𝒯, t_0^0)/𝒯 in the limit 𝒯 → ∞. The estimator is non-negative as its definition (B21) has the form of a Kullback-Leibler divergence. In the special case of a NESS, rewriting ⟨σ̂_ψ⟩_pss using Equation (B26) reveals that this estimator reduces to ⟨σ̂_Ψ⟩_pss, which we define in Equation (17), since then p^pss(t_0|I) = 1/T and the waiting-time distributions do not depend on t_0.

References

1. Seifert, U.: Stochastic thermodynamics, fluctuation theorems, and molecular machines. Rep. Prog. Phys. 75, 126001 (2012). https://doi.org/10.1088/0034-4885/75/12/126001
2. Peliti, L., Pigolotti, S.: Stochastic Thermodynamics. An Introduction. Princeton Univ.
Press, Princeton and Oxford (2021)
3. Shiraishi, N.: An Introduction to Stochastic Thermodynamics. Fundamental Theories of Physics. Springer, Singapore (2023). https://doi.org/10.1007/978-981-19-8186-9
4. Rahav, S., Horowitz, J., Jarzynski, C.: Directed flow in nonadiabatic stochastic pumps. Phys. Rev. Lett. 101, 140602 (2008). https://doi.org/10.1103/PhysRevLett.101.140602
5. Chernyak, V.Y., Sinitsyn, N.A.: Pumping restriction theorem for stochastic networks. Phys. Rev. Lett. 101, 160601 (2008). https://doi.org/10.1103/PhysRevLett.101.160601
6. Raz, O., Subasi, Y., Jarzynski, C.: Mimicking nonequilibrium steady states with time-periodic driving. Phys. Rev. X 6, 021022 (2016). https://doi.org/10.1103/PhysRevX.6.021022
7. Rotskoff, G.M.: Mapping current fluctuations of stochastic pumps to nonequilibrium steady states. Phys. Rev. E 95, 030101 (2017). https://doi.org/10.1103/PhysRevE.95.030101
8. Stigler, J., Ziegler, F., Gieseke, A., Gebhardt, J.C.M., Rief, M.: The complex folding network of single calmodulin molecules. Science 334(6055), 512–516 (2011). https://doi.org/10.1126/science.1207598
9. Roldan, E., Parrondo, J.M.R.: Estimating dissipation from single stationary trajectories. Phys. Rev. Lett. 105, 150607 (2010). https://doi.org/10.1103/PhysRevLett.105.150607
10. Muy, S., Kundu, A., Lacoste, D.: Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions. J. Chem. Phys. 139(12), 124109 (2013). https://doi.org/10.1063/1.4821760
11. Ge, H., Qian, M., Qian, H.: Stochastic theory of nonequilibrium steady states. Part II: Applications in chemical biophysics. Phys. Rep. 510(3), 87–118 (2012). https://doi.org/10.1016/j.physrep.2011.09.001
12. Esposito, M.: Stochastic thermodynamics under coarse-graining. Phys. Rev. E 85, 041125 (2012). https://doi.org/10.1103/PhysRevE.85.041125
13.
Ariga, T., Tomishige, M., Mizuno, D.: Nonequilibrium energetics of molecular motor kinesin. Phys. Rev. Lett. 121, 218101 (2018). https://doi.org/10.1103/PhysRevLett.121.218101
14. Seifert, U.: From stochastic thermodynamics to thermodynamic inference. Annu. Rev. Condens. Matter Phys. 10(1), 171–192 (2019). https://doi.org/10.1146/annurev-conmatphys-031218-013554
15. Martínez, I.A., Bisker, G., Horowitz, J.M., Parrondo, J.M.R.: Inferring broken detailed balance in the absence of observable currents. Nat. Commun. 10, 3542 (2019). https://doi.org/10.1038/s41467-019-11051-w
16. Dechant, A., Sasa, S.-I.: Improving thermodynamic bounds using correlations. Phys. Rev. X 11, 041061 (2021). https://doi.org/10.1103/PhysRevX.11.041061
17. Skinner, D.J., Dunkel, J.: Improved bounds on entropy production in living systems. Proc. Natl. Acad. Sci. USA (2021). https://doi.org/10.1073/pnas.2024300118
18. Harunari, P.E., Dutta, A., Polettini, M., Roldan, E.: What to learn from a few visible transitions' statistics? Phys. Rev. X 12, 041026 (2022). https://doi.org/10.1103/PhysRevX.12.041026
19. van der Meer, J., Ertel, B., Seifert, U.: Thermodynamic inference in partially accessible Markov networks: A unifying perspective from transition-based waiting time distributions. Phys. Rev. X 12, 031025 (2022). https://doi.org/10.1103/PhysRevX.12.031025
20. van der Meer, J., Degünther, J., Seifert, U.: Time-resolved statistics of snippets as general framework for model-free entropy estimators. Phys. Rev. Lett. 130, 257101 (2023). https://doi.org/10.1103/PhysRevLett.130.257101
21. Ohga, N., Ito, S., Kolchinsky, A.: Thermodynamic bound on the asymmetry of cross-correlations. Phys. Rev. Lett. 131, 077101 (2023). https://doi.org/10.1103/PhysRevLett.131.077101
22. Liang, S., Pigolotti, S.: Thermodynamic bounds on time-reversal asymmetry. Phys. Rev. E 108, L062101 (2023). https://doi.org/10.1103/PhysRevE.108.L062101
23.
Degünther, J., van der Meer, J., Seifert, U.: Fluctuating entropy production on the coarse-grained level: Inference and localization of irreversibility. Phys. Rev. Res. 6, 023175 (2024). https://doi.org/10.1103/PhysRevResearch.6.023175
24. Li, X., Kolomeisky, A.B.: Mechanisms and topology determination of complex chemical and biological network systems from first-passage theoretical approach. J. Chem. Phys. 139(14), 144106 (2013). https://doi.org/10.1063/1.4824392
25. Van Vu, T., Saito, K.: Topological speed limit. Phys. Rev. Lett. 130, 010402 (2023). https://doi.org/10.1103/PhysRevLett.130.010402
26. Ito, S., Dechant, A.: Stochastic time evolution, information geometry, and the Cramér-Rao bound. Phys. Rev. X 10, 021056 (2020). https://doi.org/10.1103/PhysRevX.10.021056
27. Shiraishi, N., Funo, K., Saito, K.: Speed limit for classical stochastic processes. Phys. Rev. Lett. 121, 070601 (2018). https://doi.org/10.1103/PhysRevLett.121.070601
28. Dechant, A., Garnier-Brun, J., Sasa, S.-I.: Thermodynamic bounds on correlation times. Phys. Rev. Lett. 131, 167101 (2023). https://doi.org/10.1103/PhysRevLett.131.167101
29. Barato, A.C., Seifert, U.: Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett. 114, 158101 (2015).
https://doi.org/10.1103/PhysRevLett.114.158101
30. Gingrich, T.R., Horowitz, J.M., Perunov, N., England, J.L.: Dissipation bounds all steady-state current fluctuations. Phys. Rev. Lett. 116, 120601 (2016). https://doi.org/10.1103/PhysRevLett.116.120601
31. Proesmans, K., van den Broeck, C.: Discrete-time thermodynamic uncertainty relation. EPL 119(2), 20001 (2017). https://doi.org/10.1209/0295-5075/119/20001
32.
Barato, A.C., Chetrite, R., Faggionato, A., Gabrielli, D.: Bounds on current fluctuations in periodically driven systems. New J. Phys. 20, 103023 (2018). https://doi.org/10.1088/1367-2630/aae512
33. Koyuk, T., Seifert, U.: Operationally accessible bounds on fluctuations and entropy production in periodically driven systems. Phys. Rev. Lett. 122(23), 230601 (2019). https://doi.org/10.1103/PhysRevLett.122.230601
34. Barato, A.C., Chetrite, R., Faggionato, A., Gabrielli, D.: A unifying picture of generalized thermodynamic uncertainty relations. J. Stat. Mech.: Theory Exp. 2019(8), 084017 (2019). https://doi.org/10.1088/1742-5468/ab3457
35. Shiraishi, N., Sagawa, T.: Fluctuation theorem for partially masked nonequilibrium dynamics. Phys. Rev. E 91, 012130 (2015). https://doi.org/10.1103/PhysRevE.91.012130
36. Koyuk, T., Seifert, U.: Thermodynamic uncertainty relation for time-dependent driving. Phys. Rev. Lett. 125, 260604 (2020). https://doi.org/10.1103/PhysRevLett.125.260604
37. Proesmans, K., Horowitz, J.M.: Hysteretic thermodynamic uncertainty relation for systems with broken time-reversal symmetry. J. Stat. Mech.: Theory Exp. 2019(5), 054005 (2019)
38. Harunari, P.E., Fiore, C.E., Barato, A.C.: Inference of entropy production for periodically driven systems (2024). arXiv:2406.12792 [cond-mat.stat-mech]
39. Degünther, J., van der Meer, J., Seifert, U.: Unraveling the where and when of coarse-grained entropy production: General theory meets single-molecule experiments (2024). arXiv:2405.18316 [cond-mat.stat-mech]. Proc. Natl. Acad. Sci. USA, in press
40. Otsubo, S., Manikandan, S.K., Sagawa, T., Krishnamurthy, S.: Estimating time-dependent entropy production from non-equilibrium trajectories. Commun. Phys. 5(1), 11 (2022). https://doi.org/10.1038/s42005-021-00787-x
41. Sekimoto, K.: Derivation of the first passage time distribution for Markovian process on discrete network (2022). arXiv:2110.02216 [cond-mat.stat-mech]
42.
Gomez-Marin, A., Parrondo, J.M.R., Van den Broeck, C.: Lower bounds on dissipation upon coarse-graining. Phys. Rev. E 78, 011107 (2008). https://doi.org/10.1103/PhysRevE.78.011107
43. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Telecommunications and Signal Processing. Wiley, Hoboken (2006)

Publisher's Note  Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.