| dc.contributor.author | Fakhredanesh, M. | en_US | 
| dc.contributor.author | Roostaie, S. | en_US | 
| dc.date.accessioned | 1399-07-30T18:14:54Z | fa_IR | 
| dc.date.accessioned | 2020-10-21T18:14:55Z |  | 
| dc.date.available | 1399-07-30T18:14:54Z | fa_IR | 
| dc.date.available | 2020-10-21T18:14:55Z |  | 
| dc.date.issued | 2020-01-01 | en_US | 
| dc.date.issued | 1398-10-11 | fa_IR | 
| dc.date.submitted | 2019-03-07 | en_US | 
| dc.date.submitted | 1397-12-16 | fa_IR | 
| dc.identifier.citation | Fakhredanesh, M., & Roostaie, S. (2020). Action Change Detection in Video Based on HOG. Journal of Electrical and Computer Engineering Innovations (JECEI), 8(1), 135-144. doi: 10.22061/jecei.2020.6949.351 | en_US | 
| dc.identifier.issn | 2322-3952 |  | 
| dc.identifier.issn | 2345-3044 |  | 
| dc.identifier.uri | https://dx.doi.org/10.22061/jecei.2020.6949.351 |  | 
| dc.identifier.uri | http://jecei.sru.ac.ir/article_1445.html |  | 
| dc.identifier.uri | https://iranjournals.nlai.ir/handle/123456789/437520 |  | 
| dc.description.abstract | Background and Objectives: Action recognition, the process of labeling an unknown action in a query video, is a challenging problem due to event complexity, variations in imaging conditions, and intra- and inter-individual action variability. A number of solutions have been proposed to solve the action recognition problem. Many of these frameworks assume that each video sequence contains only one action class; therefore, a video sequence must be broken down into sub-sequences, each containing a single action class. Methods: In this paper, we develop an unsupervised action change detection method that detects the times at which actions change, without classifying the actions. In this method, a silhouette-based framework is used for action representation. This representation uses xt patterns: an xt pattern is a selected frame of the xty volume, which is obtained by rotating the traditional space-time volume and rearranging its axes. In the xty volume, each frame has two axes, x and time (t), and the y value specifies the frame number. Results: To evaluate the performance of the proposed method, we created 105 artificial videos from the Weizmann dataset, as well as a time-continuous camera-captured video. Experiments on this dataset yielded a precision of 98.13% and a recall of 100%. Conclusion: The proposed unsupervised approach detects action changes with high precision; therefore, it can be combined with an action recognition method to design an integrated action recognition system. | en_US | 
| dc.format.extent | 1510 |  | 
| dc.format.mimetype | application/pdf |  | 
| dc.language | English |  | 
| dc.language.iso | en_US |  | 
| dc.publisher | Shahid Rajaee Teacher Training University | en_US | 
| dc.relation.ispartof | Journal of Electrical and Computer Engineering Innovations (JECEI) | en_US | 
| dc.relation.isversionof | https://dx.doi.org/10.22061/jecei.2020.6949.351 |  | 
| dc.subject | Artificial Intelligence | en_US | 
| dc.subject | Computer vision | en_US | 
| dc.subject | Machine Learning | en_US | 
| dc.subject | Video surveillance | en_US | 
| dc.subject | Motion analysis | en_US | 
| dc.title | Action Change Detection in Video Based on HOG | en_US | 
| dc.type | Text | en_US | 
| dc.type | Original Research Paper | en_US | 
| dc.contributor.department | Faculty of Electrical and Computer Engineering, Malek Ashtar University of Technology, Tehran, Iran | en_US | 
| dc.contributor.department | Faculty of Electrical and Computer Engineering, Malek Ashtar University of Technology, Tehran, Iran | en_US | 
| dc.citation.volume | 8 |  | 
| dc.citation.issue | 1 |  | 
| dc.citation.spage | 135 |  | 
| dc.citation.epage | 144 |  |
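
The abstract above describes forming an xty volume by rotating the space-time volume so that each slice (an xt pattern) has x and time axes, then detecting action changes without classifying the actions. The following Python sketch is purely illustrative of that idea, assuming grayscale frames stacked as a (T, Y, X) array, HOG descriptors from scikit-image, and a simple distance between descriptors of consecutive temporal windows as a change score; the function names, window sizes, and decision rule are hypothetical and do not reproduce the authors' exact pipeline.

```python
# Illustrative sketch (not the authors' implementation): build xt patterns
# from a video volume and score action changes with HOG descriptor distances.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize


def xt_pattern(video, y_index):
    """video: array of shape (T, Y, X), grayscale frames stacked over time.
    Transposing the space-time volume to (Y, T, X) turns each y-row into one
    xt pattern: a 2D image whose axes are time and x."""
    xty = np.transpose(video, (1, 0, 2))   # shape (Y, T, X)
    return xty[y_index]                    # one xt pattern, shape (T, X)


def hog_descriptor(pattern, size=(64, 64)):
    """HOG of an xt pattern; resizing keeps descriptors comparable."""
    return hog(resize(pattern, size), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))


def change_scores(video, y_index, window=30, step=5):
    """Euclidean distance between HOG descriptors of consecutive temporal
    windows of one xt pattern; peaks suggest a candidate action change
    (hypothetical rule; any threshold would need tuning on real data)."""
    pattern = xt_pattern(video, y_index)
    descs = [hog_descriptor(pattern[t:t + window])
             for t in range(0, pattern.shape[0] - window, step)]
    return [float(np.linalg.norm(a - b)) for a, b in zip(descs, descs[1:])]


if __name__ == "__main__":
    video = np.random.rand(120, 90, 160)   # synthetic stand-in for real frames
    print(change_scores(video, y_index=45)[:5])
```

In this sketch, an action change would be flagged where the score sequence peaks; on real silhouette data (e.g., the Weizmann sequences mentioned in the abstract) the window length and distance threshold would be chosen empirically.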