Portland State University PDXScholar
Student Research Symposium 2014, May 7th, 11:00 AM - 1:00 PM

Uyen T. Mai and Feng Liu, "Detecting Rule of Balance in Photography" (May 7, 2014). Student Research Symposium. Paper 8.
http://pdxscholar.library.pdx.edu/studentsymposium/2014/Poster/8

Detecting Rule of Balance in Photography
Uyen Mai ([email protected]), Feng Liu ([email protected])
Computer Graphics and Vision Lab, Portland State University

ABSTRACT
The Rule of Balance is one of the most important composition rules in photography and can be used as a standard for photo quality assessment. The rule of balance states that images with evenly distributed visual elements are visually pleasing and thus highly aesthetic. This work presents a method to automatically classify balanced and unbalanced images. Detecting the rule of balance requires a robust technique to locate and analyze important objects and visual elements, which involves understanding the image content. Since semantic understanding is currently beyond the state of the art in computer vision, we employ saliency maps as an alternative. We design a range of features according to the definition and effects of the rule of balance. Our experiments with a variety of machine learning techniques [8-11] and saliency analysis methods [2-6] demonstrate encouraging performance in detecting vertically and horizontally balanced images. In future work, the balance detection system can be developed into a subroutine for automatic evaluation of professional photography.

OBJECTIVE
The Rule of Visual Balance
- One of the most important composition rules in photography [1].
- A balanced image has its visual elements evenly distributed.
- Such images are pleasing to the eye and therefore highly aesthetic.
Our goal is to design a system that automatically detects whether a photograph respects the rule of balance.

CONTRIBUTIONS
In this project, we develop a method for detecting the rule of balance in a photo.
- Design features according to the similarity between the two halves of the image, the spatial distribution of visual elements, and the position of the centroid point.
- Introduce a new method to reduce noise in the saliency map, which can improve detection accuracy.
- Contribute to the computational understanding of photography, which can be used in automatic photo quality assessment and photo composition.

METHODS
Feature Design

1. Saliency Centroids
- The centroid point P_C of a region R in the image is defined as

    P_C = \frac{\sum_{p \in R} S(p)\, p}{\sum_{p \in R} S(p)},

  where S(p) denotes the saliency value at pixel p.
- We define the two features SC_1 and SC_2 as

    SC_1 = \frac{|P_{C_1} - P_{C_2}|}{W}, \qquad SC_2 = \frac{d(C, L)}{H},

  where L is the middle line of the image, C_1 and C_2 are the centroid points of the left and right halves, C is the midpoint of the segment C_1 C_2, d(C, L) is the distance from C to L, and W and H are the image width and height.

[Figures: saliency maps with centroid points for a balanced and an unbalanced example image, together with their SC_1 and SC_2 values; the region between the two centroids; the minimal window W_50.]

Saliency Noise Compensation by Thresholding
- Sort the values of the saliency map in increasing order.
- Choose a low and a high percentile threshold.
- Set pixels below the low threshold to 0; clip pixels above the high threshold to the high-threshold value.
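The thresholding and saliency-centroid steps above are straightforward to prototype. Below is a minimal Python/NumPy sketch, not the authors' implementation: the function names (compensate_noise, centroid, saliency_centroid_features) are my own, the saliency map is assumed to be a 2-D array of non-negative values produced by any of the saliency methods [2-6], and the default 40/85 percentiles simply echo the best low-high thresholds reported for this feature in the results section.

```python
import numpy as np

def compensate_noise(saliency, low_pct=40, high_pct=85):
    """Percentile thresholding: zero out weak saliency, clip strong saliency."""
    low = np.percentile(saliency, low_pct)
    high = np.percentile(saliency, high_pct)
    out = np.where(saliency < low, 0.0, saliency)
    return np.minimum(out, high)

def centroid(saliency):
    """Saliency-weighted centroid (x, y) of a region."""
    h, w = saliency.shape
    total = saliency.sum()
    if total == 0:
        return np.array([w / 2.0, h / 2.0])   # fall back to the geometric center
    ys, xs = np.mgrid[0:h, 0:w]
    return np.array([(saliency * xs).sum() / total, (saliency * ys).sum() / total])

def saliency_centroid_features(saliency):
    """SC1: distance between the left- and right-half centroids, normalized by width.
    SC2: distance of their midpoint from the middle line, normalized by height
    (following the poster's definition)."""
    h, w = saliency.shape
    left, right = saliency[:, : w // 2], saliency[:, w // 2 :]
    c1 = centroid(left)
    c2 = centroid(right) + np.array([w // 2, 0.0])   # shift back to full-image coordinates
    sc1 = np.linalg.norm(c1 - c2) / w
    midpoint = (c1 + c2) / 2.0
    sc2 = abs(midpoint[0] - w / 2.0) / h             # distance to the middle line x = W/2
    return sc1, sc2
```

In this sketch, saliency_centroid_features(compensate_noise(sal)) would produce the SC_1, SC_2 pair that is later fed to the classifiers.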
2. Total Difference
- Defined as the difference between the total saliency of the left half and the right half of the image:

    D = \Big| \sum_{p \in I_{\mathrm{left}}} S(p) - \sum_{p \in I_{\mathrm{right}}} S(p) \Big|,

  where I_left and I_right denote the left and right halves of the image, respectively.

3. Pixel-Wise Difference
- Filter the left half and the right half of the saliency map for noise reduction.
- Identify the minimal window W_alpha around the centroid such that W_alpha contains at least alpha% of the total saliency.
- Compute the pixel-wise difference value over matching pixel pairs:

    PWD = \sum_{\substack{(P_l, P_r)\ \text{matching} \\ P_l, P_r \in W_\alpha}} \big| S(P_l) - S(P_r) \big|,

  where P_l and P_r are corresponding pixels of the two windows.
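The two difference features can be sketched in the same style. This is a hedged illustration rather than the authors' implementation: the poster cites summed-area tables [7], which suggests window sums were computed with an integral image, whereas the sketch below simply grows a square window around each half's centroid until it contains a fraction alpha of that half's saliency and matches left and right pixels by mirroring the right half across the middle line. The helper names, the mirroring scheme, and the alpha default (0.5, echoing the W_50 window in the figure) are all assumptions.

```python
import numpy as np

def _centroid(sal):
    """Saliency-weighted centroid (x, y); geometric center if the region is empty."""
    h, w = sal.shape
    total = sal.sum()
    if total == 0:
        return np.array([w / 2.0, h / 2.0])
    ys, xs = np.mgrid[0:h, 0:w]
    return np.array([(sal * xs).sum() / total, (sal * ys).sum() / total])

def total_difference(saliency):
    """|total saliency of the left half - total saliency of the right half|."""
    h, w = saliency.shape
    return abs(saliency[:, : w // 2].sum() - saliency[:, w // 2 :].sum())

def minimal_window_radius(half, center, alpha=0.5):
    """Radius of the smallest square window around `center` holding >= alpha of the saliency."""
    h, w = half.shape
    cy, cx = int(round(center[1])), int(round(center[0]))
    target = alpha * half.sum()
    for r in range(1, max(h, w) + 1):
        y0, y1 = max(0, cy - r), min(h, cy + r + 1)
        x0, x1 = max(0, cx - r), min(w, cx + r + 1)
        if half[y0:y1, x0:x1].sum() >= target:
            return r
    return max(h, w)

def pixelwise_difference(saliency, alpha=0.5):
    """Sum of |S(P_l) - S(P_r)| over mirrored pixel pairs inside equal-sized windows."""
    h, w = saliency.shape
    left = saliency[:, : w // 2]
    right = saliency[:, w // 2 :][:, ::-1]      # mirror so matched pixels line up
    c_l, c_r = _centroid(left), _centroid(right)
    r = max(minimal_window_radius(left, c_l, alpha),
            minimal_window_radius(right, c_r, alpha))

    def crop(img, c):
        cy, cx = int(round(c[1])), int(round(c[0]))
        return img[max(0, cy - r): cy + r + 1, max(0, cx - r): cx + r + 1]

    wl, wr = crop(left, c_l), crop(right, c_r)
    hh, ww = min(wl.shape[0], wr.shape[0]), min(wl.shape[1], wr.shape[1])
    return np.abs(wl[:hh, :ww] - wr[:hh, :ww]).sum()
```

The poster evaluates vertical and horizontal balance separately; presumably the same features are computed on the top and bottom halves for the horizontal case, which this sketch leaves out.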
EXPERIMENTS AND RESULTS

Dataset
- Collect images from Flickr.
- Manually label 586 vertically balanced and 428 horizontally balanced images as the positive set, and 2,574 unbalanced images as the negative set.

Saliency Threshold Examination
- Apply different low-high percentile thresholds to the dataset.
- Examine the effect of the saliency threshold values on each feature.
- Use an SVM [11] to build the classifiers.
- Run the examination separately for vertical and horizontal balance.

Best low-high threshold and accuracy (%) of each feature, with and without saliency thresholding:

                             Best threshold   W/o thresholding   With thresholding
  Vertical balance
    Saliency Centroids           40-85              87.4               89.1
    Total Difference             45-100             87.0               87.1
    Pixel-Wise Difference        40-85              87.5               88.3
  Horizontal balance
    Saliency Centroids           40-85              67.3               69.1
    Total Difference             45-95              68.3               68.5
    Pixel-Wise Difference        45-85              68.2               69.9

Balance Rule Detection
- Apply a range of classic machine learning algorithms to the detection of the rule of balance (see the sketch at the end of this document).
- Take 80% of the dataset as the training set and the remaining 20% as the testing set.
- Use 5-fold cross-validation to evaluate the overall performance.

Classification accuracy of vertical balance (%):

              Saliency Centroids   Total Difference   Pixel-Wise Difference    All
  Logistic          88.7                88.2                  89.2             93.7
  kNN               86.9                88.1                  85.3             90.8
  SVM               89.1                87.1                  88.3             92.4
  AdaBoost          88.9                86.1                  88.4             91.5

Classification accuracy of horizontal balance (%):

              Saliency Centroids   Total Difference   Pixel-Wise Difference    All
  Logistic          66.7                64.2                  66.9             70.7
  kNN               66.7                62.1                  65.7             66.8
  SVM               69.1                68.5                  69.9             71.2
  AdaBoost          67.4                62.1                  66.4             68.5

CONCLUSIONS
- In this research, we designed features according to the definition, implementation, and effect of the rule of balance, including the centroid point position, the similarity between the two halves, and the pixel-wise comparison.
- We tested these features with a range of classic machine learning algorithms.
- Experiments show that our method, together with these features, achieves encouraging results in detecting the rule of balance in photographs.

REFERENCES
[1] B. P. Krages, The Art of Composition. Allworth Communications, Inc., 2005.
[2] L. Itti and C. Koch, "Computational modeling of visual attention," Nature Reviews Neuroscience, vol. 2, no. 3, pp. 194-203, 2001.
[3] J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," in Proceedings of Neural Information Processing Systems, 2006.
[4] X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[5] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection," in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1597-1604.
[6] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 409-416.
[7] F. C. Crow, "Summed-area tables for texture mapping," in Proceedings of ACM SIGGRAPH '84, 1984, pp. 207-212.
[8] I. Jolliffe, Principal Component Analysis. Wiley Online Library, 2002.
[9] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[10] Y. Freund and R. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," in Computational Learning Theory. Springer, 1995, pp. 23-37.
[11] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol., vol. 2, pp. 27:1-27:27, May 2011.

Contact: Uyen Mai, [email protected]
Acknowledgement: this project is in part supported by URMP and NSF grants CNS-1205746 and IIS-1321119.
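To round out the Balance Rule Detection protocol referenced above (80%/20% train/test split, 5-fold cross-validation, an SVM among the tested classifiers), here is a small end-of-document sketch. It assumes scikit-learn rather than the LIBSVM interface cited in [11], takes a precomputed feature matrix built from the features sketched earlier, and guesses that cross-validation was run on the training split, since the poster does not say; all names are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_balance_classifier(X, y, seed=0):
    """X: one row of features per image (e.g. SC1, SC2, total and pixel-wise difference);
    y: 1 for balanced, 0 for unbalanced."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)

    # 80% training / 20% testing split, as described under Balance Rule Detection.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

    # 5-fold cross-validation for the overall performance estimate.
    cv_accuracy = cross_val_score(clf, X_train, y_train, cv=5).mean()

    clf.fit(X_train, y_train)
    return cv_accuracy, clf.score(X_test, y_test)
```

Swapping the SVC for LogisticRegression, KNeighborsClassifier, or AdaBoostClassifier would cover the other rows of the accuracy tables above.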