Semantic Information G Theory for Range Control with Tradeoff between Purposiveness and Efficiency
Author: | Lu, Chenguang |
---|---|
Year of publication: | 2024 |
Subject: | |
Document type: | Working Paper |
Description: | Recent advances in deep learning suggest that we need to maximize and minimize two different kinds of information simultaneously. The Information Max-Min (IMM) method has been used in deep learning, reinforcement learning, and maximum entropy control. Shannon's information rate-distortion function is the theoretical basis of Minimizing Mutual Information (MMI) and data compression, but it is not sufficient to solve the IMM problem. The author has proposed the semantic information G theory (i.e., Shannon-Lu theory), including the semantic information G measure and the information rate-fidelity function R(G) (where R is the MMI for a given amount G of semantic mutual information). The parametric solution of the R(G) function provides a general method for improving the information efficiency, G/R. This paper briefly introduces the semantic information G measure and the parametric solution of the R(G) function. Two examples reveal that the parametric solution can help us optimize range control with a tradeoff between purposiveness (i.e., semantic mutual information) and information efficiency. It seems that the R(G) function can serve as the theoretical basis of IMM methods, but further research is still needed in combination with deep learning, reinforcement learning, and constraint control. Comment: 9 pages and 6 figures |
Database: | arXiv |
External link: |
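The abstract contrasts Shannon mutual information R = I(X;Y) with semantic mutual information G and defines the information efficiency G/R. The sketch below is a rough, hypothetical illustration only, not the paper's implementation: the prior p(x), channel p(y|x), and truth functions T(θ|x) are invented toy values, and the G formula used here is the commonly cited form of Lu's semantic information measure, with logical probability T(θ_j) = Σ_x p(x) T(θ_j|x).

```python
import math

# Hypothetical toy values (assumptions, not from the paper):
p_x = [0.5, 0.5]            # prior over two source states x0, x1
channel = [[0.9, 0.1],      # p(y|x): rows indexed by x, columns by y
           [0.2, 0.8]]
truth = [[1.0, 0.1],        # T(theta_j | x): truth of label y_j given x
         [0.1, 1.0]]

# Shannon mutual information R = I(X;Y), in bits
p_y = [sum(p_x[i] * channel[i][j] for i in range(2)) for j in range(2)]
R = sum(p_x[i] * channel[i][j] * math.log2(channel[i][j] / p_y[j])
        for i in range(2) for j in range(2))

# Semantic mutual information G, using the logical probability
# T(theta_j) = sum_x p(x) * T(theta_j | x)
T_y = [sum(p_x[i] * truth[i][j] for i in range(2)) for j in range(2)]
G = sum(p_x[i] * channel[i][j] * math.log2(truth[i][j] / T_y[j])
        for i in range(2) for j in range(2))

print(f"R = {R:.3f} bits, G = {G:.3f} bits, efficiency G/R = {G / R:.3f}")
```

For these toy numbers G/R is close to but below 1; the paper's parametric solution of R(G) concerns how to trade purposiveness (G) against this efficiency, which the sketch does not attempt to reproduce.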