<?xml version="1.0" encoding="UTF-8" ?>
<oai_dc:dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:title>3D detection of people</dc:title>
<dc:creator>Bernal Rodríguez, Antonio Patricio</dc:creator>
<dc:contributor>Roca Nieto, Lucas</dc:contributor>
<dc:contributor>Expresión Gráfica</dc:contributor>
<dc:subject>3D</dc:subject>
<dc:subject>Three dimensional space</dc:subject>
<dc:subject>Camera</dc:subject>
<dc:subject>Multisensor integration</dc:subject>
<dc:subject>Espacio tridimensional</dc:subject>
<dc:subject>Cámara</dc:subject>
<dc:subject>Multisensor de integración</dc:subject>
<dc:subject>Expresión Gráfica en Ingeniería</dc:subject>
<dc:description>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dc:description>
<dc:description>Escuela Técnica Superior de Ingeniería Industrial</dc:description>
<dc:date>2009-04-21</dc:date>
<dc:type>info:eu-repo/semantics/bachelorThesis</dc:type>
<dc:identifier>http://hdl.handle.net/10317/1075</dc:identifier>
<dc:language>eng</dc:language>
<dc:rights>Atribución-NoComercial-SinDerivadas 3.0 España</dc:rights>
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dc:rights>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:format>application/pdf</dc:format>
<dc:publisher>Universidad Politécnica de Cartagena</dc:publisher>
</oai_dc:dc>
<?xml version="1.0" encoding="UTF-8" ?>
<d:DIDL schemaLocation="urn:mpeg:mpeg21:2002:02-DIDL-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/did/didl.xsd">
<d:DIDLInfo>
<dcterms:created schemaLocation="http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/dcterms.xsd">2009-07-09T07:47:10Z</dcterms:created>
</d:DIDLInfo>
<d:Item id="hdl_10317_1075">
<d:Descriptor>
<d:Statement mimeType="application/xml; charset=utf-8">
<dii:Identifier schemaLocation="urn:mpeg:mpeg21:2002:01-DII-NS http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-21_schema_files/dii/dii.xsd">urn:hdl:10317/1075</dii:Identifier>
</d:Statement>
</d:Descriptor>
<d:Descriptor>
<d:Statement mimeType="application/xml; charset=utf-8">
<oai_dc:dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:title>3D detection of people</dc:title>
<dc:creator>Bernal Rodríguez, Antonio Patricio</dc:creator>
<dc:contributor>Roca Nieto, Lucas</dc:contributor>
<dc:contributor>Expresión Gráfica</dc:contributor>
<dc:subject>3D</dc:subject>
<dc:subject>Three dimensional space</dc:subject>
<dc:subject>Camera</dc:subject>
<dc:subject>Multisensor integration</dc:subject>
<dc:subject>Espacio tridimensional</dc:subject>
<dc:subject>Cámara</dc:subject>
<dc:subject>Multisensor de integración</dc:subject>
<dc:description>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dc:description>
<dc:date>2009-07-09T07:47:10Z</dc:date>
<dc:date>2009-07-09T07:47:10Z</dc:date>
<dc:date>2009-04-21</dc:date>
<dc:type>info:eu-repo/semantics/bachelorThesis</dc:type>
<dc:identifier>http://hdl.handle.net/10317/1075</dc:identifier>
<dc:language>eng</dc:language>
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dc:rights>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:rights>Atribución-NoComercial-SinDerivadas 3.0 España</dc:rights>
<dc:publisher>Universidad Politécnica de Cartagena</dc:publisher>
</oai_dc:dc>
</d:Statement>
</d:Descriptor>
<d:Component id="10317_1075_1">
</d:Component>
</d:Item>
</d:DIDL>
<?xml version="1.0" encoding="UTF-8" ?>
<dim:dim schemaLocation="http://www.dspace.org/xmlns/dspace/dim http://www.dspace.org/schema/dim.xsd">
<dim:field authority="ae9647aa-2cde-4ff1-ba33-3789803005fc" confidence="500" element="contributor" mdschema="dc" qualifier="author">Bernal Rodríguez, Antonio Patricio</dim:field>
<dim:field authority="315" confidence="600" element="contributor" mdschema="dc" qualifier="advisor">Roca Nieto, Lucas</dim:field>
<dim:field element="contributor" lang="es" mdschema="dc" qualifier="department">Expresión Gráfica</dim:field>
<dim:field element="date" mdschema="dc" qualifier="accessioned">2009-07-09T07:47:10Z</dim:field>
<dim:field element="date" mdschema="dc" qualifier="available">2009-07-09T07:47:10Z</dim:field>
<dim:field element="date" mdschema="dc" qualifier="issued">2009-04-21</dim:field>
<dim:field element="description" lang="es" mdschema="dc" qualifier="abstract">The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have a range of view in common. The place chosen to record the videos for this work is the Nexus Lab in the Universität Stuttgärt. In this work, it is possible to distinguish two main tasks: The first task is to obtain a model of the camera which let us turn the twodimensional information from the images taken from the cameras, in three-dimensional information in a known reference-frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera, that is, the location of the people, which appear in an image scene, must be known to compute their three-dimensional location. This information will be obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and knowing their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dim:field>
<dim:field element="description" lang="es" mdschema="dc" qualifier="centro">Escuela Técnica Superior de Ingeniería Industrial</dim:field>
<dim:field element="format" lang="en" mdschema="dc">application/pdf</dim:field>
<dim:field element="language" lang="es" mdschema="dc" qualifier="iso">eng</dim:field>
<dim:field element="publisher" lang="es" mdschema="dc">Universidad Politécnica de Cartagena</dim:field>
<dim:field element="rights" lang="*" mdschema="dc">Atribución-NoComercial-SinDerivadas 3.0 España</dim:field>
<dim:field element="rights" lang="*" mdschema="dc" qualifier="uri">http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dim:field>
<dim:field element="rights" lang="es" mdschema="dc" qualifier="accessRights">info:eu-repo/semantics/openAccess</dim:field>
<dim:field element="title" lang="es" mdschema="dc">3D detection of people</dim:field>
<dim:field element="type" lang="es" mdschema="dc">info:eu-repo/semantics/bachelorThesis</dim:field>
<dim:field element="subject" lang="es_ES" mdschema="dc" qualifier="other">Expresión Gráfica en Ingeniería</dim:field>
<dim:field element="subject" lang="es" mdschema="dc">3D</dim:field>
<dim:field element="subject" lang="es" mdschema="dc">Three dimensional space</dim:field>
<dim:field element="subject" lang="es" mdschema="dc">Camera</dim:field>
<dim:field element="subject" lang="es" mdschema="dc">Multisensor integration</dim:field>
<dim:field element="subject" mdschema="dc">Espacio tridimensional</dim:field>
<dim:field element="subject" mdschema="dc">Cámara</dim:field>
<dim:field element="subject" mdschema="dc">Multisensor de integración</dim:field>
<dim:field element="identifier" mdschema="dc" qualifier="uri">http://hdl.handle.net/10317/1075</dim:field>
</dim:dim>
<?xml version="1.0" encoding="UTF-8" ?>
<europeana:record schemaLocation="http://www.europeana.eu/schemas/ese/ http://www.europeana.eu/schemas/ese/ESE-V3.4.xsd">
<dc:title>3D detection of people</dc:title>
<dc:creator>Bernal Rodríguez, Antonio Patricio</dc:creator>
<dc:contributor>Roca Nieto, Lucas</dc:contributor>
<dc:contributor>Expresión Gráfica</dc:contributor>
<dc:subject>3D</dc:subject>
<dc:subject>Three dimensional space</dc:subject>
<dc:subject>Camera</dc:subject>
<dc:subject>Multisensor integration</dc:subject>
<dc:subject>Espacio tridimensional</dc:subject>
<dc:subject>Cámara</dc:subject>
<dc:subject>Multisensor de integración</dc:subject>
<dc:subject>Expresión Gráfica en Ingeniería</dc:subject>
<dc:description>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dc:description>
<dc:description>Escuela Técnica Superior de Ingeniería Industrial</dc:description>
<dc:date>2009-07-09T07:47:10Z</dc:date>
<dc:date>2009-07-09T07:47:10Z</dc:date>
<dc:date>2009-04-21</dc:date>
<dc:type>info:eu-repo/semantics/bachelorThesis</dc:type>
<dc:identifier>http://hdl.handle.net/10317/1075</dc:identifier>
<dc:language>eng</dc:language>
<dc:rights>Atribución-NoComercial-SinDerivadas 3.0 España</dc:rights>
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dc:rights>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:format>application/pdf</dc:format>
<dc:publisher>Universidad Politécnica de Cartagena</dc:publisher>
<europeana:provider>Hispana</europeana:provider>
<europeana:type>TEXT</europeana:type>
<europeana:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</europeana:rights>
<europeana:dataProvider>Universidad Politécnica de Cartagena</europeana:dataProvider>
<europeana:isShownAt>http://hdl.handle.net/10317/1075</europeana:isShownAt>
</europeana:record>
<?xml version="1.0" encoding="UTF-8" ?>
<thesis schemaLocation="http://www.ndltd.org/standards/metadata/etdms/1.0/ http://www.ndltd.org/standards/metadata/etdms/1.0/etdms.xsd">
<title>3D detection of people</title>
<creator>Bernal Rodríguez, Antonio Patricio</creator>
<contributor>Roca Nieto, Lucas</contributor>
<contributor>Expresión Gráfica</contributor>
<subject>3D</subject>
<subject>Three dimensional space</subject>
<subject>Camera</subject>
<subject>Multisensor integration</subject>
<subject>Espacio tridimensional</subject>
<subject>Cámara</subject>
<subject>Multisensor de integración</subject>
<description>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</description>
<date>2009-07-09</date>
<date>2009-07-09</date>
<date>2009-04-21</date>
<type>info:eu-repo/semantics/bachelorThesis</type>
<identifier>http://hdl.handle.net/10317/1075</identifier>
<language>eng</language>
<rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</rights>
<rights>info:eu-repo/semantics/openAccess</rights>
<rights>Atribución-NoComercial-SinDerivadas 3.0 España</rights>
<publisher>Universidad Politécnica de Cartagena</publisher>
</thesis>
<?xml version="1.0" encoding="UTF-8" ?>
<record schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
<leader>00925nam 22002777a 4500</leader>
<datafield ind1=" " ind2=" " tag="042">
<subfield code="a">dc</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="720">
<subfield code="a">Bernal Rodríguez, Antonio Patricio</subfield>
<subfield code="e">author</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="260">
<subfield code="c">2009-04-21</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="520">
<subfield code="a">The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have a range of view in common. The place chosen to record the videos for this work is the Nexus Lab in the Universität Stuttgärt. In this work, it is possible to distinguish two main tasks: The first task is to obtain a model of the camera which let us turn the twodimensional information from the images taken from the cameras, in three-dimensional information in a known reference-frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera, that is, the location of the people, which appear in an image scene, must be known to compute their three-dimensional location. This information will be obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and knowing their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</subfield>
</datafield>
<datafield ind1="8" ind2=" " tag="024">
<subfield code="a">http://hdl.handle.net/10317/1075</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">3D</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Three dimensional space</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Camera</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Multisensor integration</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Espacio tridimensional</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Cámara</subfield>
</datafield>
<datafield ind1=" " ind2=" " tag="653">
<subfield code="a">Multisensor de integración</subfield>
</datafield>
<datafield ind1="0" ind2="0" tag="245">
<subfield code="a">3D detection of people</subfield>
</datafield>
</record>
<?xml version="1.0" encoding="UTF-8" ?>
<mets ID=" DSpace_ITEM_10317-1075" OBJID=" hdl:10317/1075" PROFILE="DSpace METS SIP Profile 1.0" TYPE="DSpace ITEM" schemaLocation="http://www.loc.gov/METS/ http://www.loc.gov/standards/mets/mets.xsd">
<metsHdr CREATEDATE="2022-11-15T04:24:01Z">
<agent ROLE="CUSTODIAN" TYPE="ORGANIZATION">
<name>REPOSITORIO DIGITAL UPCT</name>
</agent>
</metsHdr>
<dmdSec ID="DMD_10317_1075">
<mdWrap MDTYPE="MODS">
<xmlData schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
<mods:mods schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
<mods:name>
<mods:role>
<mods:roleTerm type="text">author</mods:roleTerm>
</mods:role>
<mods:namePart>Bernal Rodríguez, Antonio Patricio</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">advisor</mods:roleTerm>
</mods:role>
<mods:namePart>Roca Nieto, Lucas</mods:namePart>
</mods:name>
<mods:name>
<mods:role>
<mods:roleTerm type="text">department</mods:roleTerm>
</mods:role>
<mods:namePart>Expresión Gráfica</mods:namePart>
</mods:name>
<mods:extension>
<mods:dateAccessioned encoding="iso8601">2009-07-09T07:47:10Z</mods:dateAccessioned>
</mods:extension>
<mods:extension>
<mods:dateAvailable encoding="iso8601">2009-07-09T07:47:10Z</mods:dateAvailable>
</mods:extension>
<mods:originInfo>
<mods:dateIssued encoding="iso8601">2009-04-21</mods:dateIssued>
</mods:originInfo>
<mods:identifier type="uri">http://hdl.handle.net/10317/1075</mods:identifier>
<mods:abstract>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</mods:abstract>
<mods:language>
<mods:languageTerm authority="rfc3066">eng</mods:languageTerm>
</mods:language>
<mods:accessCondition type="useAndReproduction">Atribución-NoComercial-SinDerivadas 3.0 España</mods:accessCondition>
<mods:subject>
<mods:topic>3D</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Three dimensional space</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Camera</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Multisensor integration</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Espacio tridimensional</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Cámara</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Multisensor de integración</mods:topic>
</mods:subject>
<mods:titleInfo>
<mods:title>3D detection of people</mods:title>
</mods:titleInfo>
<mods:genre>info:eu-repo/semantics/bachelorThesis</mods:genre>
</mods:mods>
</xmlData>
</mdWrap>
</dmdSec>
<amdSec ID="TMD_10317_1075">
<rightsMD ID="RIG_10317_1075">
<mdWrap MDTYPE="OTHER" MIMETYPE="text/plain" OTHERMDTYPE="DSpaceDepositLicense">
<binData>TGljZW5zZSBncmFudGVkIGJ5IEFSQUNFTEkgVcORQSBTQUxDRURBIChhcmFjZWxpLnNhbGNlZGFAYmliLnVwY3QuZXMpIG9uIDIwMDktMDctMDdUMTA6NTM6MTBaIChHTVQpOgoKPHRhYmxlIHdpZHRoPSI1NzciIHN0eWxlPSJib3JkZXItYm90dG9tOnRoaW47Ym9yZGVyLXRvcDp0aGluO2JvcmRlci1sZWZ0OnRoaW47Ym9yZGVyLXJpZ2h0OnRoaW4iPgogIDx0cj4KICAgIDx0ZCBzdHlsZT0idGV4dC1hbGlnbjpqdXN0aWZ5Ij4KCTxzdHJvbmc+TGljZW5jaWEgZGUgRGlzdHJpYnVjaSZvYWN1dGU7biBObyBFeGNsdXNpdmE8L3N0cm9uZz4KCiAgICA8cD5QYXJhIHF1ZSBlbCBSZXBvc2l0b3JpbyBkZSBsYSBVbml2ZXJzaWRhZCBQb2xpdCZlYWN1dGU7Y25pY2EgZGUgQ2FydGFnZW5hCnB1ZWRhIHJlcHJvZHVjaXIgeSBjb211bmljYXIgcCZ1YWN1dGU7YmxpY2FtZW50ZSBzdSBkb2N1bWVudG8gZXMgbmVjZXNhcmlvCmxhIGFjZXB0YWNpJm9hY3V0ZTtuIGRlIGxvcyBzaWd1aWVudGVzIHQmZWFjdXRlO3JtaW5vcy48L3A+CiAgICA8cD5Qb3IgZmF2b3IsIGxlYSBsYXMgc2lndWllbnRlcyBjb25kaWNpb25lcyBkZSBsaWNlbmNpYTo8L3A+CjxvbCBzdHlsZT0ibGlzdC1zdHlsZS10eXBlOmRlY2ltYWwiPgoKPGxpPkFjZXB0YW5kbyBlc3RhIGxpY2VuY2lhLCB1c3RlZCAoZWwgYXV0b3IvZXMgbyBlbCBwcm9waWV0YXJpby9zCiAgZGUgbG9zIGRlcmVjaG9zIGRlIGF1dG9yKSBnYXJhbnRpemEgYSBsYSBVbml2ZXJzaWRhZCBQb2xpdCZlYWN1dGU7Y25pY2EKICBkZSBDYXJ0YWdlbmEgIGVsIGRlcmVjaG8gbm8gZXhjbHVzaXZvIGRlIGFyY2hpdmFyLCByZXByb2R1Y2lyLAogIGNvbnZlcnRpciAoY29tbyBzZSBkZWZpbmUgbSZhYWN1dGU7cyBhYmFqbyksIGNvbXVuaWNhciB5L28gZGlzdHJpYnVpcgogIHN1IGRvY3VtZW50byBtdW5kaWFsbWVudGUgZW4gZm9ybWF0byBlbGVjdHImb2FjdXRlO25pY28uPC9saT4KPGJyLz4gIAo8bGk+VGFtYmkmYWN1dGU7biBlc3QmYWFjdXRlOyBkZSBhY3VlcmRvIGNvbiBxdWUgbGEgVW5pdmVyc2lkYWQgUG9saXQmZWFjdXRlO2NuaWNhCiAgZGUgQ2FydGFnZW5hICBwdWVkYSBjb25zZXJ2YXIgbSZhYWN1dGU7cyBkZSB1bmEgY29waWEgZGUgZXN0ZSBkb2N1bWVudG8geSwKICBzaW4gYWx0ZXJhciBzdSBjb250ZW5pZG8sIGNvbnZlcnRpcmxvIGEgY3VhbHF1aWVyIGZvcm1hdG8gZGUgZmljaGVybywKICBtZWRpbyBvIHNvcG9ydGUsIHBhcmEgcHJvcCZvYWN1dGU7c2l0b3MgZGUgc2VndXJpZGFkLCBwcmVzZXJ2YWNpJm9hY3V0ZTtuIHkgYWNjZXNvLjwvbGk+Cjxici8+CjxsaT5EZWNsYXJhIHF1ZSBlbCBkb2N1bWVudG8gZXMgdW4gdHJhYmFqbyBvcmlnaW5hbCBzdXlvIHkvbyBxdWUKICB0aWVuZSBlbCBkZXJlY2hvIHBhcmEgb3RvcmdhciBsb3MgZGVyZWNob3MgY29udGVuaWRvcyBlbiBlc3RhIGxpY2VuY2lhLgogIFRhbWJpJmVhY3V0ZTtuIGRlY2xhcmEgcXVlIHN1IGRvY3VtZW50byBubyBpbmZyaW5nZSwgZW4gdGFudG8gZW4gY3VhbnRvCiAgbGUgc2VhIHBvc2libGUgc2FiZXIsIGxvcyBkZXJlY2hvcyBkZSBhdXRvciBkZSBuaW5ndW5hIG90cmEgcGVyc29uYQogIG8gZW50aWRhZC4KPC9saT4KPGJyLz4KPGxpPlNpIGVsIGRvY3VtZW50byBjb250aWVuZSBtYXRlcmlhbGVzIGRlIGxvcyBjdWFsZXMgbm8gdGllbmUgbG9zCiAgZGVyZWNob3MgZGUgYXV0b3IsIGRlY2xhcmEgcXVlIGhhIG9idGVuaWRvIGVsIHBlcm1pc28gc2luIHJlc3RyaWNjaSZvYWN1dGU7bgogIGRlbCBwcm9waWV0YXJpbyBkZSBsb3MgZGVyZWNob3MgZGUgYXV0b3IgcGFyYSBvdG9yZ2FyIGEgbGEgVW5pdmVyc2lkYWQKICBQb2xpdCZhY3V0ZTtjbmljYSBkZSBDYXJ0YWdlbmEgIGxvcyBkZXJlY2hvcyByZXF1ZXJpZG9zIHBvciBlc3RhIGxpY2VuY2lhLCB5CiAgcXVlIGVzZSBtYXRlcmlhbCBjdXlvcyBkZXJlY2hvcyBzb24gZGUgdGVyY2Vyb3MgZXN0JmFhY3V0ZTsgY2xhcmFtZW50ZSBpZGVudGlmaWNhZG8KICB5IHJlY29ub2NpZG8gZW4gZWwgdGV4dG8gbyBjb250ZW5pZG8gZGVsIGRvY3VtZW50byBlbnRyZWdhZG8uPC9saT4KPGJyLz4KPGxpPlNpIGVsIGRvY3VtZW50byBzZSBiYXNhIGVuIHVuYSBvYnJhIHF1ZSBoYSBzaWRvIHBhdHJvY2luYWRhIG8gYXBveWFkYQogIHBvciB1bmEgYWdlbmNpYSB1IG9yZ2FuaXphY2kmb2FjdXRlO24gZGlmZXJlbnRlIGRlIGxhIFVuaXZlcnNpZGFkIFBvbGl0JmVhY3V0ZTtjbmljYQogIGRlIENhcnRhZ2VuYSwgc2UgcHJlc3Vwb25lIHF1ZSBzZSBoYSBjdW1wbGlkbyBjb24gY3VhbHF1aWVyIGRlcmVjaG8gZGUKICByZXZpc2kmb2FjdXRlO24gdSBvdHJhcyBvYmxpZ2FjaW9uZXMgcmVxdWVyaWRhcyBwb3IgZXN0ZSBjb250cmF0byBvIGFjdWVyZG8uPC9saT4KPGJyLz4gIAo8bGk+TGEgVW5pdmVyc2lkYWQgUG9saXQmZWFjdXRlO2NuaWNhIGRlIENhcnRhZ2VuYSAgaWRlbnRpZmljYXImYWFjdXRlOyBjbGFyYW1lbnRlIHN1L3MKICBub21icmUvcyBjb21vIGVsL2xvcyBhdXRvci9lcyBvIHByb3BpZXRhcmlvL3MgZGUgbG9zIGRlcmVjaG9zIGRlbCBkb2N1bWVudG8sCiAgeSBubyBoYXImYWFjdXRlOyBuaW5ndW5hIGFsdGVyYWNpJm9hY3V0ZTtuIGRlIHN1IGRvY3VtZW50byBkaWZlcmVudGUgYSBsYXMgcGVybWl0aWRhcwogIGVuIG
VzdGEgbGljZW5jaWEuICA8L2xpPgo8YnIvPgogPC9vbD4gCiAgPC90ZD4KICA8L3RyPgo8L3RhYmxlPgo=</binData>
</mdWrap>
</rightsMD>
</amdSec>
<amdSec ID="FO_10317_1075_1">
<techMD ID="TECH_O_10317_1075_1">
<mdWrap MDTYPE="PREMIS">
<xmlData schemaLocation="http://www.loc.gov/standards/premis http://www.loc.gov/standards/premis/PREMIS-v1-0.xsd">
<premis:premis>
<premis:object>
<premis:objectIdentifier>
<premis:objectIdentifierType>URL</premis:objectIdentifierType>
<premis:objectIdentifierValue>https://repositorio.upct.es/bitstream/10317/1075/1/pfc3075.pdf</premis:objectIdentifierValue>
</premis:objectIdentifier>
<premis:objectCategory>File</premis:objectCategory>
<premis:objectCharacteristics>
<premis:fixity>
<premis:messageDigestAlgorithm>MD5</premis:messageDigestAlgorithm>
<premis:messageDigest>b13175aad4f9f9abb5f4944747345cd9</premis:messageDigest>
</premis:fixity>
<premis:size>1236867</premis:size>
<premis:format>
<premis:formatDesignation>
<premis:formatName>application/pdf</premis:formatName>
</premis:formatDesignation>
</premis:format>
</premis:objectCharacteristics>
<premis:originalName>pfc3075.pdf</premis:originalName>
</premis:object>
</premis:premis>
</xmlData>
</mdWrap>
</techMD>
</amdSec>
<amdSec ID="FT_10317_1075_6">
<techMD ID="TECH_T_10317_1075_6">
<mdWrap MDTYPE="PREMIS">
<xmlData schemaLocation="http://www.loc.gov/standards/premis http://www.loc.gov/standards/premis/PREMIS-v1-0.xsd">
<premis:premis>
<premis:object>
<premis:objectIdentifier>
<premis:objectIdentifierType>URL</premis:objectIdentifierType>
<premis:objectIdentifierValue>https://repositorio.upct.es/bitstream/10317/1075/6/pfc3075.pdf.txt</premis:objectIdentifierValue>
</premis:objectIdentifier>
<premis:objectCategory>File</premis:objectCategory>
<premis:objectCharacteristics>
<premis:fixity>
<premis:messageDigestAlgorithm>MD5</premis:messageDigestAlgorithm>
<premis:messageDigest>de6a144559f7f4897851e36b717f283c</premis:messageDigest>
</premis:fixity>
<premis:size>96438</premis:size>
<premis:format>
<premis:formatDesignation>
<premis:formatName>text/plain</premis:formatName>
</premis:formatDesignation>
</premis:format>
</premis:objectCharacteristics>
<premis:originalName>pfc3075.pdf.txt</premis:originalName>
</premis:object>
</premis:premis>
</xmlData>
</mdWrap>
</techMD>
</amdSec>
<fileSec>
<fileGrp USE="ORIGINAL">
<file ADMID="FO_10317_1075_1" CHECKSUM="b13175aad4f9f9abb5f4944747345cd9" CHECKSUMTYPE="MD5" GROUPID="GROUP_BITSTREAM_10317_1075_1" ID="BITSTREAM_ORIGINAL_10317_1075_1" MIMETYPE="application/pdf" SEQ="1" SIZE="1236867">
</file>
</fileGrp>
<fileGrp USE="TEXT">
<file ADMID="FT_10317_1075_6" CHECKSUM="de6a144559f7f4897851e36b717f283c" CHECKSUMTYPE="MD5" GROUPID="GROUP_BITSTREAM_10317_1075_6" ID="BITSTREAM_TEXT_10317_1075_6" MIMETYPE="text/plain" SEQ="6" SIZE="96438">
</file>
</fileGrp>
</fileSec>
<structMap LABEL="DSpace Object" TYPE="LOGICAL">
<div ADMID="DMD_10317_1075" TYPE="DSpace Object Contents">
<div TYPE="DSpace BITSTREAM">
</div>
</div>
</structMap>
</mets>
<?xml version="1.0" encoding="UTF-8" ?>
<mods:mods schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
<mods:name>
<mods:namePart>Bernal Rodríguez, Antonio Patricio</mods:namePart>
</mods:name>
<mods:extension>
<mods:dateAvailable encoding="iso8601">2009-07-09T07:47:10Z</mods:dateAvailable>
</mods:extension>
<mods:extension>
<mods:dateAccessioned encoding="iso8601">2009-07-09T07:47:10Z</mods:dateAccessioned>
</mods:extension>
<mods:originInfo>
<mods:dateIssued encoding="iso8601">2009-04-21</mods:dateIssued>
</mods:originInfo>
<mods:identifier type="uri">http://hdl.handle.net/10317/1075</mods:identifier>
<mods:abstract>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</mods:abstract>
<mods:language>
<mods:languageTerm>eng</mods:languageTerm>
</mods:language>
<mods:accessCondition type="useAndReproduction">http://creativecommons.org/licenses/by-nc-nd/3.0/es/</mods:accessCondition>
<mods:accessCondition type="useAndReproduction">info:eu-repo/semantics/openAccess</mods:accessCondition>
<mods:accessCondition type="useAndReproduction">Atribución-NoComercial-SinDerivadas 3.0 España</mods:accessCondition>
<mods:subject>
<mods:topic>3D</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Three dimensional space</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Camera</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Multisensor integration</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Espacio tridimensional</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Cámara</mods:topic>
</mods:subject>
<mods:subject>
<mods:topic>Multisensor de integración</mods:topic>
</mods:subject>
<mods:titleInfo>
<mods:title>3D detection of people</mods:title>
</mods:titleInfo>
<mods:genre>info:eu-repo/semantics/bachelorThesis</mods:genre>
</mods:mods>
<?xml version="1.0" encoding="UTF-8" ?>
<atom:entry schemaLocation="http://www.w3.org/2005/Atom http://www.kbcafe.com/rss/atom.xsd.xml">
<atom:id>http://hdl.handle.net/10317/1075/ore.xml</atom:id>
<atom:published>2009-07-09T07:47:10Z</atom:published>
<atom:updated>2009-07-09T07:47:10Z</atom:updated>
<atom:source>
<atom:generator>REPOSITORIO DIGITAL UPCT</atom:generator>
</atom:source>
<atom:title>3D detection of people</atom:title>
<atom:author>
<atom:name>Bernal Rodríguez, Antonio Patricio</atom:name>
</atom:author>
<oreatom:triples>
<rdf:Description about="http://hdl.handle.net/10317/1075/ore.xml#atom">
<dcterms:modified>2009-07-09T07:47:10Z</dcterms:modified>
</rdf:Description>
<rdf:Description about="https://repositorio.upct.es/bitstream/10317/1075/6/pfc3075.pdf.txt">
<dcterms:description>TEXT</dcterms:description>
</rdf:Description>
<rdf:Description about="https://repositorio.upct.es/bitstream/10317/1075/1/pfc3075.pdf">
<dcterms:description>ORIGINAL</dcterms:description>
</rdf:Description>
<rdf:Description about="https://repositorio.upct.es/bitstream/10317/1075/2/license_url">
<dcterms:description>CC-LICENSE</dcterms:description>
</rdf:Description>
<rdf:Description about="https://repositorio.upct.es/bitstream/10317/1075/3/license_text">
<dcterms:description>CC-LICENSE</dcterms:description>
</rdf:Description>
<rdf:Description about="https://repositorio.upct.es/bitstream/10317/1075/4/license_rdf">
<dcterms:description>CC-LICENSE</dcterms:description>
</rdf:Description>
<rdf:Description about="https://repositorio.upct.es/bitstream/10317/1075/5/license.txt">
<dcterms:description>LICENSE</dcterms:description>
</rdf:Description>
</oreatom:triples>
</atom:entry>
<?xml version="1.0" encoding="UTF-8" ?>
<qdc:qualifieddc schemaLocation="http://purl.org/dc/elements/1.1/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dc.xsd http://purl.org/dc/terms/ http://dublincore.org/schemas/xmls/qdc/2006/01/06/dcterms.xsd http://dspace.org/qualifieddc/ http://www.ukoln.ac.uk/metadata/dcmi/xmlschema/qualifieddc.xsd">
<dc:title>3D detection of people</dc:title>
<dc:creator>Bernal Rodríguez, Antonio Patricio</dc:creator>
<dc:contributor>Roca Nieto, Lucas</dc:contributor>
<dc:contributor>Expresión Gráfica</dc:contributor>
<dc:subject>3D</dc:subject>
<dc:subject>Three dimensional space</dc:subject>
<dc:subject>Camera</dc:subject>
<dc:subject>Multisensor integration</dc:subject>
<dc:subject>Espacio tridimensional</dc:subject>
<dc:subject>Cámara</dc:subject>
<dc:subject>Multisensor de integración</dc:subject>
<dcterms:abstract>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dcterms:abstract>
<dcterms:dateAccepted>2009-07-09T07:47:10Z</dcterms:dateAccepted>
<dcterms:available>2009-07-09T07:47:10Z</dcterms:available>
<dcterms:created>2009-07-09T07:47:10Z</dcterms:created>
<dcterms:issued>2009-04-21</dcterms:issued>
<dc:type>info:eu-repo/semantics/bachelorThesis</dc:type>
<dc:identifier>http://hdl.handle.net/10317/1075</dc:identifier>
<dc:language>eng</dc:language>
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dc:rights>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:rights>Atribución-NoComercial-SinDerivadas 3.0 España</dc:rights>
<dc:publisher>Universidad Politécnica de Cartagena</dc:publisher>
</qdc:qualifieddc>
<?xml version="1.0" encoding="UTF-8" ?>
<rdf:RDF schemaLocation="http://www.openarchives.org/OAI/2.0/rdf/ http://www.openarchives.org/OAI/2.0/rdf.xsd">
<ow:Publication about="oai:repositorio.upct.es:10317/1075">
<dc:title>3D detection of people</dc:title>
<dc:creator>Bernal Rodríguez, Antonio Patricio</dc:creator>
<dc:contributor>Roca Nieto, Lucas</dc:contributor>
<dc:contributor>Expresión Gráfica</dc:contributor>
<dc:subject>3D</dc:subject>
<dc:subject>Three dimensional space</dc:subject>
<dc:subject>Camera</dc:subject>
<dc:subject>Multisensor integration</dc:subject>
<dc:subject>Espacio tridimensional</dc:subject>
<dc:subject>Cámara</dc:subject>
<dc:subject>Multisensor de integración</dc:subject>
<dc:description>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dc:description>
<dc:date>2009-07-09T07:47:10Z</dc:date>
<dc:date>2009-07-09T07:47:10Z</dc:date>
<dc:date>2009-04-21</dc:date>
<dc:type>info:eu-repo/semantics/bachelorThesis</dc:type>
<dc:identifier>http://hdl.handle.net/10317/1075</dc:identifier>
<dc:language>eng</dc:language>
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/3.0/es/</dc:rights>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:rights>Atribución-NoComercial-SinDerivadas 3.0 España</dc:rights>
<dc:publisher>Universidad Politécnica de Cartagena</dc:publisher>
</ow:Publication>
</rdf:RDF>
<?xml version="1.0" encoding="UTF-8" ?>
<uketd_dc:uketddc schemaLocation="http://naca.central.cranfield.ac.uk/ethos-oai/2.0/ http://naca.central.cranfield.ac.uk/ethos-oai/2.0/uketd_dc.xsd">
<dc:title>3D detection of people</dc:title>
<dc:creator>Bernal Rodríguez, Antonio Patricio</dc:creator>
<uketdterms:advisor>Roca Nieto, Lucas</uketdterms:advisor>
<dcterms:abstract>The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have an overlapping field of view. The place chosen to record the videos for this work is the Nexus Lab at the Universität Stuttgart. In this work, it is possible to distinguish two main tasks. The first task is to obtain a model of the camera which lets us turn the two-dimensional information from the images taken by the cameras into three-dimensional information in a known reference frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera; that is, the location of the people who appear in an image scene must be known in order to compute their three-dimensional location. This information is obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and the knowledge of their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</dcterms:abstract>
<uketdterms:institution>Universidad Politécnica de Cartagena</uketdterms:institution>
<dcterms:issued>2009-04-21</dcterms:issued>
<dc:type>info:eu-repo/semantics/bachelorThesis</dc:type>
<dc:language type="dcterms:ISO639-2">eng</dc:language>
<dcterms:isReferencedBy>http://hdl.handle.net/10317/1075</dcterms:isReferencedBy>
<dcterms:hasFormat>https://repositorio.upct.es/bitstream/10317/1075/6/pfc3075.pdf.txt</dcterms:hasFormat>
<uketdterms:checksum type="uketdterms:MD5">de6a144559f7f4897851e36b717f283c</uketdterms:checksum>
<dc:identifier type="dcterms:URI">https://repositorio.upct.es/bitstream/10317/1075/1/pfc3075.pdf</dc:identifier>
<uketdterms:checksum type="uketdterms:MD5">b13175aad4f9f9abb5f4944747345cd9</uketdterms:checksum>
<dcterms:license>https://repositorio.upct.es/bitstream/10317/1075/5/license.txt</dcterms:license>
<uketdterms:checksum type="uketdterms:MD5">43479ce05bd42437a9dee2a29a9d9fcd</uketdterms:checksum>
<dc:rights>Atribución-NoComercial-SinDerivadas 3.0 España</dc:rights>
<dc:subject>3D</dc:subject>
<dc:subject>Three dimensional space</dc:subject>
<dc:subject>Camera</dc:subject>
<dc:subject>Multisensor integration</dc:subject>
<dc:subject>Espacio tridimensional</dc:subject>
<dc:subject>Cámara</dc:subject>
<dc:subject>Multisensor de integración</dc:subject>
<dc:subject>Expresión Gráfica en Ingeniería</dc:subject>
</uketd_dc:uketddc>
<?xml version="1.0" encoding="UTF-8" ?>
<metadata schemaLocation="http://www.lyncode.com/xoai http://www.lyncode.com/xsd/xoai.xsd">
<element name="dc">
<element name="contributor">
<element name="author">
<element name="none">
<field name="value">Bernal Rodríguez, Antonio Patricio</field>
<field name="authority">ae9647aa-2cde-4ff1-ba33-3789803005fc</field>
<field name="confidence">500</field>
</element>
</element>
<element name="advisor">
<element name="none">
<field name="value">Roca Nieto, Lucas</field>
<field name="authority">315</field>
<field name="confidence">600</field>
</element>
</element>
<element name="department">
<element name="es">
<field name="value">Expresión Gráfica</field>
</element>
</element>
</element>
<element name="date">
<element name="accessioned">
<element name="none">
<field name="value">2009-07-09T07:47:10Z</field>
</element>
</element>
<element name="available">
<element name="none">
<field name="value">2009-07-09T07:47:10Z</field>
</element>
</element>
<element name="issued">
<element name="none">
<field name="value">2009-04-21</field>
</element>
</element>
</element>
<element name="description">
<element name="abstract">
<element name="es">
<field name="value">The aim of this work is to obtain the three-dimensional location of people in a scene using the information provided by several cameras; these cameras must have a range of view in common. The place chosen to record the videos for this work is the Nexus Lab in the Universität Stuttgärt. In this work, it is possible to distinguish two main tasks: The first task is to obtain a model of the camera which let us turn the twodimensional information from the images taken from the cameras, in three-dimensional information in a known reference-frame; this topic is studied in chapter 2. The second task is to obtain relevant information from the images of each camera, that is, the location of the people, which appear in an image scene, must be known to compute their three-dimensional location. This information will be obtained by reviewing different types of methods. This topic is described in chapter 3. Finally, the three-dimensional location of the people in the room has been accomplished by integrating the two-dimensional information obtained from the images of each camera with the model of the cameras and knowing their locations in the room. In order to quantify the measurement error of the system, several video tests have been recorded. The error analysis is accomplished in chapter 4.</field>
</element>
</element>
<element name="centro">
<element name="es">
<field name="value">Escuela Técnica Superior de Ingeniería Industrial</field>
</element>
</element>
</element>
<element name="format">
<element name="en">
<field name="value">application/pdf</field>
</element>
</element>
<element name="language">
<element name="iso">
<element name="es">
<field name="value">eng</field>
</element>
</element>
</element>
<element name="publisher">
<element name="es">
<field name="value">Universidad Politécnica de Cartagena</field>
</element>
</element>
<element name="rights">
<element name="*">
<field name="value">Atribución-NoComercial-SinDerivadas 3.0 España</field>
</element>
<element name="uri">
<element name="*">
<field name="value">http://creativecommons.org/licenses/by-nc-nd/3.0/es/</field>
</element>
</element>
<element name="accessRights">
<element name="es">
<field name="value">info:eu-repo/semantics/openAccess</field>
</element>
</element>
</element>
<element name="title">
<element name="es">
<field name="value">3D detection of people</field>
</element>
</element>
<element name="type">
<element name="es">
<field name="value">info:eu-repo/semantics/bachelorThesis</field>
</element>
</element>
<element name="subject">
<element name="other">
<element name="es_ES">
<field name="value">Expresión Gráfica en Ingeniería</field>
</element>
</element>
<element name="es">
<field name="value">3D</field>
<field name="value">Three dimensional space</field>
<field name="value">Camera</field>
<field name="value">Multisensor integration</field>
</element>
<element name="none">
<field name="value">Espacio tridimensional</field>
<field name="value">Cámara</field>
<field name="value">Multisensor de integración</field>
</element>
</element>
<element name="identifier">
<element name="uri">
<element name="none">
<field name="value">http://hdl.handle.net/10317/1075</field>
</element>
</element>
</element>
</element>
<element name="bundles">
<element name="bundle">
<field name="name">TEXT</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">pfc3075.pdf.txt</field>
<field name="originalName">pfc3075.pdf.txt</field>
<field name="description">Extracted text</field>
<field name="format">text/plain</field>
<field name="size">96438</field>
<field name="url">https://repositorio.upct.es/bitstream/10317/1075/6/pfc3075.pdf.txt</field>
<field name="checksum">de6a144559f7f4897851e36b717f283c</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">6</field>
</element>
</element>
</element>
<element name="bundle">
<field name="name">ORIGINAL</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">pfc3075.pdf</field>
<field name="originalName">pfc3075.pdf</field>
<field name="format">application/pdf</field>
<field name="size">1236867</field>
<field name="url">https://repositorio.upct.es/bitstream/10317/1075/1/pfc3075.pdf</field>
<field name="checksum">b13175aad4f9f9abb5f4944747345cd9</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">1</field>
</element>
</element>
</element>
<element name="bundle">
<field name="name">CC-LICENSE</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">license_url</field>
<field name="originalName">license_url</field>
<field name="format">text/plain</field>
<field name="size">52</field>
<field name="url">https://repositorio.upct.es/bitstream/10317/1075/2/license_url</field>
<field name="checksum">81d062cbbb61e4032033a8fbcbca52da</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">2</field>
</element>
<element name="bitstream">
<field name="name">license_text</field>
<field name="originalName">license_text</field>
<field name="format">text/html; charset=utf-8</field>
<field name="size">15057</field>
<field name="url">https://repositorio.upct.es/bitstream/10317/1075/3/license_text</field>
<field name="checksum">55a563026c43ae7ba947ca23796406b7</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">3</field>
</element>
<element name="bitstream">
<field name="name">license_rdf</field>
<field name="originalName">license_rdf</field>
<field name="format">application/rdf+xml; charset=utf-8</field>
<field name="size">5382</field>
<field name="url">https://repositorio.upct.es/bitstream/10317/1075/4/license_rdf</field>
<field name="checksum">61aba5cc326f38695ae8b44f80d2c936</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">4</field>
</element>
</element>
</element>
<element name="bundle">
<field name="name">LICENSE</field>
<element name="bitstreams">
<element name="bitstream">
<field name="name">license.txt</field>
<field name="originalName">license.txt</field>
<field name="format">text/plain</field>
<field name="size">2720</field>
<field name="url">https://repositorio.upct.es/bitstream/10317/1075/5/license.txt</field>
<field name="checksum">43479ce05bd42437a9dee2a29a9d9fcd</field>
<field name="checksumAlgorithm">MD5</field>
<field name="sid">5</field>
</element>
</element>
</element>
</element>
<element name="others">
<field name="handle">10317/1075</field>
<field name="identifier">oai:repositorio.upct.es:10317/1075</field>
<field name="lastModifyDate">2018-04-11 12:09:43.662</field>
</element>
<element name="repository">
<field name="name">REPOSITORIO DIGITAL UPCT</field>
<field name="mail">repositorio@bib.upct.es</field>
</element>
<element name="license">
<field name="bin">TGljZW5zZSBncmFudGVkIGJ5IEFSQUNFTEkgVcORQSBTQUxDRURBIChhcmFjZWxpLnNhbGNlZGFAYmliLnVwY3QuZXMpIG9uIDIwMDktMDctMDdUMTA6NTM6MTBaIChHTVQpOgoKPHRhYmxlIHdpZHRoPSI1NzciIHN0eWxlPSJib3JkZXItYm90dG9tOnRoaW47Ym9yZGVyLXRvcDp0aGluO2JvcmRlci1sZWZ0OnRoaW47Ym9yZGVyLXJpZ2h0OnRoaW4iPgogIDx0cj4KICAgIDx0ZCBzdHlsZT0idGV4dC1hbGlnbjpqdXN0aWZ5Ij4KCTxzdHJvbmc+TGljZW5jaWEgZGUgRGlzdHJpYnVjaSZvYWN1dGU7biBObyBFeGNsdXNpdmE8L3N0cm9uZz4KCiAgICA8cD5QYXJhIHF1ZSBlbCBSZXBvc2l0b3JpbyBkZSBsYSBVbml2ZXJzaWRhZCBQb2xpdCZlYWN1dGU7Y25pY2EgZGUgQ2FydGFnZW5hCnB1ZWRhIHJlcHJvZHVjaXIgeSBjb211bmljYXIgcCZ1YWN1dGU7YmxpY2FtZW50ZSBzdSBkb2N1bWVudG8gZXMgbmVjZXNhcmlvCmxhIGFjZXB0YWNpJm9hY3V0ZTtuIGRlIGxvcyBzaWd1aWVudGVzIHQmZWFjdXRlO3JtaW5vcy48L3A+CiAgICA8cD5Qb3IgZmF2b3IsIGxlYSBsYXMgc2lndWllbnRlcyBjb25kaWNpb25lcyBkZSBsaWNlbmNpYTo8L3A+CjxvbCBzdHlsZT0ibGlzdC1zdHlsZS10eXBlOmRlY2ltYWwiPgoKPGxpPkFjZXB0YW5kbyBlc3RhIGxpY2VuY2lhLCB1c3RlZCAoZWwgYXV0b3IvZXMgbyBlbCBwcm9waWV0YXJpby9zCiAgZGUgbG9zIGRlcmVjaG9zIGRlIGF1dG9yKSBnYXJhbnRpemEgYSBsYSBVbml2ZXJzaWRhZCBQb2xpdCZlYWN1dGU7Y25pY2EKICBkZSBDYXJ0YWdlbmEgIGVsIGRlcmVjaG8gbm8gZXhjbHVzaXZvIGRlIGFyY2hpdmFyLCByZXByb2R1Y2lyLAogIGNvbnZlcnRpciAoY29tbyBzZSBkZWZpbmUgbSZhYWN1dGU7cyBhYmFqbyksIGNvbXVuaWNhciB5L28gZGlzdHJpYnVpcgogIHN1IGRvY3VtZW50byBtdW5kaWFsbWVudGUgZW4gZm9ybWF0byBlbGVjdHImb2FjdXRlO25pY28uPC9saT4KPGJyLz4gIAo8bGk+VGFtYmkmYWN1dGU7biBlc3QmYWFjdXRlOyBkZSBhY3VlcmRvIGNvbiBxdWUgbGEgVW5pdmVyc2lkYWQgUG9saXQmZWFjdXRlO2NuaWNhCiAgZGUgQ2FydGFnZW5hICBwdWVkYSBjb25zZXJ2YXIgbSZhYWN1dGU7cyBkZSB1bmEgY29waWEgZGUgZXN0ZSBkb2N1bWVudG8geSwKICBzaW4gYWx0ZXJhciBzdSBjb250ZW5pZG8sIGNvbnZlcnRpcmxvIGEgY3VhbHF1aWVyIGZvcm1hdG8gZGUgZmljaGVybywKICBtZWRpbyBvIHNvcG9ydGUsIHBhcmEgcHJvcCZvYWN1dGU7c2l0b3MgZGUgc2VndXJpZGFkLCBwcmVzZXJ2YWNpJm9hY3V0ZTtuIHkgYWNjZXNvLjwvbGk+Cjxici8+CjxsaT5EZWNsYXJhIHF1ZSBlbCBkb2N1bWVudG8gZXMgdW4gdHJhYmFqbyBvcmlnaW5hbCBzdXlvIHkvbyBxdWUKICB0aWVuZSBlbCBkZXJlY2hvIHBhcmEgb3RvcmdhciBsb3MgZGVyZWNob3MgY29udGVuaWRvcyBlbiBlc3RhIGxpY2VuY2lhLgogIFRhbWJpJmVhY3V0ZTtuIGRlY2xhcmEgcXVlIHN1IGRvY3VtZW50byBubyBpbmZyaW5nZSwgZW4gdGFudG8gZW4gY3VhbnRvCiAgbGUgc2VhIHBvc2libGUgc2FiZXIsIGxvcyBkZXJlY2hvcyBkZSBhdXRvciBkZSBuaW5ndW5hIG90cmEgcGVyc29uYQogIG8gZW50aWRhZC4KPC9saT4KPGJyLz4KPGxpPlNpIGVsIGRvY3VtZW50byBjb250aWVuZSBtYXRlcmlhbGVzIGRlIGxvcyBjdWFsZXMgbm8gdGllbmUgbG9zCiAgZGVyZWNob3MgZGUgYXV0b3IsIGRlY2xhcmEgcXVlIGhhIG9idGVuaWRvIGVsIHBlcm1pc28gc2luIHJlc3RyaWNjaSZvYWN1dGU7bgogIGRlbCBwcm9waWV0YXJpbyBkZSBsb3MgZGVyZWNob3MgZGUgYXV0b3IgcGFyYSBvdG9yZ2FyIGEgbGEgVW5pdmVyc2lkYWQKICBQb2xpdCZhY3V0ZTtjbmljYSBkZSBDYXJ0YWdlbmEgIGxvcyBkZXJlY2hvcyByZXF1ZXJpZG9zIHBvciBlc3RhIGxpY2VuY2lhLCB5CiAgcXVlIGVzZSBtYXRlcmlhbCBjdXlvcyBkZXJlY2hvcyBzb24gZGUgdGVyY2Vyb3MgZXN0JmFhY3V0ZTsgY2xhcmFtZW50ZSBpZGVudGlmaWNhZG8KICB5IHJlY29ub2NpZG8gZW4gZWwgdGV4dG8gbyBjb250ZW5pZG8gZGVsIGRvY3VtZW50byBlbnRyZWdhZG8uPC9saT4KPGJyLz4KPGxpPlNpIGVsIGRvY3VtZW50byBzZSBiYXNhIGVuIHVuYSBvYnJhIHF1ZSBoYSBzaWRvIHBhdHJvY2luYWRhIG8gYXBveWFkYQogIHBvciB1bmEgYWdlbmNpYSB1IG9yZ2FuaXphY2kmb2FjdXRlO24gZGlmZXJlbnRlIGRlIGxhIFVuaXZlcnNpZGFkIFBvbGl0JmVhY3V0ZTtjbmljYQogIGRlIENhcnRhZ2VuYSwgc2UgcHJlc3Vwb25lIHF1ZSBzZSBoYSBjdW1wbGlkbyBjb24gY3VhbHF1aWVyIGRlcmVjaG8gZGUKICByZXZpc2kmb2FjdXRlO24gdSBvdHJhcyBvYmxpZ2FjaW9uZXMgcmVxdWVyaWRhcyBwb3IgZXN0ZSBjb250cmF0byBvIGFjdWVyZG8uPC9saT4KPGJyLz4gIAo8bGk+TGEgVW5pdmVyc2lkYWQgUG9saXQmZWFjdXRlO2NuaWNhIGRlIENhcnRhZ2VuYSAgaWRlbnRpZmljYXImYWFjdXRlOyBjbGFyYW1lbnRlIHN1L3MKICBub21icmUvcyBjb21vIGVsL2xvcyBhdXRvci9lcyBvIHByb3BpZXRhcmlvL3MgZGUgbG9zIGRlcmVjaG9zIGRlbCBkb2N1bWVudG8sCiAgeSBubyBoYXImYWFjdXRlOyBuaW5ndW5hIGFsdGVyYWNpJm9hY3V0ZTtuIGRlIHN1IGRvY3VtZW50byBkaWZlcmVudGUgYSBsYXMgcGVybWl0aWRhc
wogIGVuIGVzdGEgbGljZW5jaWEuICA8L2xpPgo8YnIvPgogPC9vbD4gCiAgPC90ZD4KICA8L3RyPgo8L3RhYmxlPgo=</field>
</element>
</metadata>