The Ints class is Guava's utility wrapper for the Java primitive type int.

```java
// In Java 8 this can be replaced by Integer.BYTES; it is the number of bytes in an int
```

```java
// The largest power of two representable as an int, computed as 1 << (Integer.SIZE - 2):
// one bit is reserved for the sign, and the top value bit itself occupies one position
```
Computes the hashcode of the given int value. Like Integer.hashCode(int) in JDK 8, it simply returns the value itself.

```java
public static int hashCode(int value) {
```
Casts a long to an int, checking for overflow. If the argument falls outside the int range [-2^31, 2^31-1], an IllegalArgumentException is thrown; otherwise the value itself is returned.

```java
public static int checkedCast(long value) {
```
Converts a long to a value within the int range. Unlike checkedCast, when the argument is greater than Integer.MAX_VALUE or less than Integer.MIN_VALUE, it returns the maximum or minimum value respectively instead of throwing an exception.

```java
public static int saturatedCast(long value) {
```
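A minimal plain-Java sketch of the two behaviors described above (a dependency-free reimplementation for illustration, not Guava's actual source):

```java
public class CastSketch {
    // Throws if the long does not fit in an int, mirroring the checkedCast contract
    static int checkedCast(long value) {
        int result = (int) value;
        if (result != value) {
            throw new IllegalArgumentException("out of int range: " + value);
        }
        return result;
    }

    // Clamps to Integer.MIN_VALUE / Integer.MAX_VALUE, mirroring the saturatedCast contract
    static int saturatedCast(long value) {
        if (value > Integer.MAX_VALUE) return Integer.MAX_VALUE;
        if (value < Integer.MIN_VALUE) return Integer.MIN_VALUE;
        return (int) value;
    }
}
```

The narrowing cast in checkedCast is a cheap overflow check: the round-tripped int equals the original long exactly when the value fits.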
Compares two int values, equivalent to ((Integer) a).compareTo(b). There are three cases: when a is less than / equal to / greater than b, it returns -1 / 0 / 1 respectively. Note: as of JDK 7, Integer.compare(int, int) is recommended instead of this method.

```java
public static int compare(int a, int b) {
```
Checks whether the given array contains the target value.

```java
public static boolean contains(int[] array, int target) {
```

The remaining methods of the class, by signature:

```java
public static int indexOf(int[] array, int target) {
private static int indexOf(int[] array, int target, int start, int end) {
public static int indexOf(int[] array, int[] target) {
public static int lastIndexOf(int[] array, int target) {
private static int lastIndexOf(int[] array, int target, int start, int end) {
public static int min(int... array) {
public static int max(int... array) {
public static int constrainToRange(int value, int min, int max) {
public static int[] concat(int[]... arrays) {
public static byte[] toByteArray(int value) {
public static int fromByteArray(byte[] bytes) {
public static int fromBytes(byte b1, byte b2, byte b3, byte b4) {
public static Converter<String, Integer> stringConverter() {
public static int[] ensureCapacity(int[] array, int minLength, int padding) {
public static String join(String separator, int... array) {
public static void sortDescending(int[] array, int fromIndex, int toIndex) {
public static void reverse(int[] array) {
public static void reverse(int[] array, int fromIndex, int toIndex) {
public static int[] toArray(Collection<? extends Number> collection) {
public static List<Integer> asList(int... backingArray) {
public static Integer tryParse(String string) {
public static Integer tryParse(String string, int radix) {
```
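The byte-array conversions in the list above work in big-endian order (most significant byte first). A dependency-free sketch of the toByteArray/fromBytes behavior (reimplemented here for illustration; not Guava's own code, though it follows the same contract):

```java
public class IntBytesSketch {
    // Encodes an int as 4 bytes, most significant byte first (big-endian)
    static byte[] toByteArray(int value) {
        return new byte[] {
            (byte) (value >> 24), (byte) (value >> 16),
            (byte) (value >> 8),  (byte) value
        };
    }

    // Recombines 4 big-endian bytes into an int; the & 0xFF masks undo sign extension
    static int fromBytes(byte b1, byte b2, byte b3, byte b4) {
        return b1 << 24 | (b2 & 0xFF) << 16 | (b3 & 0xFF) << 8 | (b4 & 0xFF);
    }
}
```

The two functions are inverses, so a round trip recovers the original int, including negative values.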
To keep improving Mockito, and with it the unit-testing experience, we would like you to upgrade to Mockito 2.1.0. Mockito follows semantic versioning and only changes the major version for significant changes. Over a library's lifetime, breaking changes that modify existing behavior or APIs are sometimes unavoidable in order to introduce a set of useful features. For a comprehensive guide to the new release, including incompatible changes, see "What's new in Mockito 2" on the "Mockito 2" wiki page. We hope you will enjoy Mockito 2.0!
0.1. Mockito Android support
With Mockito version 2.6.1 we ship "native" Android support. To enable Android support, add the mockito-android library as a dependency of your project. This artifact is published to the same Mockito organization and can be imported for Android as follows: You can continue to run the same unit tests on a regular VM by using the mockito-core artifact in your "testCompile" scope as shown above. Be aware that you cannot use the inline mock maker on Android due to limitations in the Android VM. If you encounter issues with mocking on Android, please open an issue on the official issue tracker, and do provide the version of Android you are working on and the dependencies of your project.
0.2. Configuration-free inline mock making
Starting with version 2.7.6, we offer the 'mockito-inline' artifact that enables inline mock making without configuring the MockMaker extension file. To use this, add the mockito-inline artifact instead of the mockito-core artifact as follows: Be aware that this artifact may be discontinued once the inline mock-making feature is integrated into the default mock maker.
Follow our example and mock a List, simply because everyone is familiar with the List interface (methods such as add(), get(), clear()). In real life, don't mock the List interface itself; use a real instance instead.

```java
// static imports make the code cleaner
```

Once created, a mock remembers all interactions. You can then selectively verify whichever interactions you are interested in.

```java
// you can mock concrete classes, not only interfaces
```
Mockito verifies argument values in a natural Java style: by using equals(). Sometimes, when extra flexibility is required, you may use argument matchers:

```java
// using the built-in anyInt() argument matcher
```

Argument matchers make verification and stubbing more flexible. Click here for more built-in matchers and for examples of custom argument matchers or hamcrest matchers.

For information solely on custom argument matchers, see the ArgumentMatcher class documentation.

Be reasonable with complicated argument matching: matching with equals() plus the occasional anyX() tends to keep test code clean and simple. Sometimes it is better to refactor the code to allow equals() matching, or to implement equals() on the relevant class, to help with testing.

Also read section 15 or the ArgumentCaptor class documentation. ArgumentCaptor is a special argument matcher that captures argument values.
A note on argument matchers:

If you are using argument matchers, all arguments have to be provided by matchers.

Example (the example shows verification, but the same applies to stubbing):

```java
verify(mock).someMethod(anyInt(), anyString(), eq("third argument"));
```

Matcher methods like anyObject() and eq() do not return matchers. Internally, they record a matcher on a stack and return a dummy value, usually null. This implementation is dictated by the static type safety imposed by the Java compiler. The consequence is that you cannot use anyObject() or eq() outside of verified or stubbed methods.
```java
mockedList.add("once");
```

By default, verify() checks times(1), i.e. that the method was invoked exactly once, so times(1) is usually omitted.
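Conceptually, verification just counts recorded invocations against an expected count. A hand-rolled toy sketch of that idea (the class and method names here are invented for illustration; this is not how Mockito is actually implemented):

```java
import java.util.ArrayList;
import java.util.List;

public class TinyRecorder {
    private final List<String> calls = new ArrayList<>();

    // Every interaction with the "mock" is remembered
    void record(String methodName) { calls.add(methodName); }

    // Like verify(mock).someMethod(): the default expectation is exactly one call
    void verify(String methodName) { verify(methodName, 1); }

    void verify(String methodName, int wantedTimes) {
        long actual = calls.stream().filter(methodName::equals).count();
        if (actual != wantedTimes) {
            throw new AssertionError(methodName + ": wanted " + wantedTimes + " but was " + actual);
        }
    }
}
```

The one-argument verify delegating to times(1) is exactly why the count is usually omitted in Mockito tests.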
```java
doThrow(new RuntimeException()).when(mockedList).clear();
```

For information on doThrow | doAnswer and friends, read section 12.
```java
// A. verifying that interactions on a single mock happened in a given order
```

Verification in order is flexible: you don't have to verify all interactions one by one, only those you are interested in. Also, you can create an InOrder object passing only the mocks that are relevant for in-order verification.

```java
// using mock objects
```

```java
// using a mock
```
Some users tend to use verifyNoMoreInteractions() frequently, even in every test method, but verifyNoMoreInteractions() is not recommended in every test. It is a handy assertion from the interaction-testing toolkit, useful only when it is relevant to check for redundant invocations. Abusing it leads to less maintainable test code. never() is a more explicit form that communicates the intent well.
```java
public class ArticleManagerTest {
```

Important! The following line needs to be called before the test methods run, typically in a base class of the test class or in a test runner:

```java
MockitoAnnotations.initMocks(testClass);
```

Alternatively, use the built-in runner MockitoJUnitRunner or the rule MockitoRule. For JUnit 5 tests, see section 45. For more information on mock annotations, read the MockitoAnnotations documentation.
Sometimes we need to stub the same method call with different return values or exceptions. A typical use case is mocking iterators. This feature was not included in the original Mockito; for instance, an Iterable or a plain collection can often be used instead of an iterator, and those offer a more natural way of stubbing. Still, stubbing consecutive calls can be useful in some scenarios. For example:

```java
when(mock.someMethod("some arg"))
```

There is also a shorter version of consecutive stubbing:

```java
// first call returns "one", second returns "two", third returns "three"
```
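The consecutive-stubbing semantics can be pictured as a queue of answers in which the last answer keeps repeating once the queue is exhausted. A toy model of that idea (the class below is invented for illustration and is not Mockito code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class AnswerQueue<T> {
    private final Deque<T> answers = new ArrayDeque<>();
    private T last;

    // thenReturn("one").thenReturn("two") maps onto two chained enqueue calls
    AnswerQueue<T> thenReturn(T value) {
        answers.add(value);
        last = value;
        return this;
    }

    // Each stubbed call consumes one answer; further calls repeat the last one
    T next() {
        T head = answers.poll();
        return head != null ? head : last;
    }
}
```

This mirrors Mockito's documented behavior: any consecutive call beyond the stubbed ones returns the last stubbed value.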
Allows stubbing with the generic Answer interface. This is yet another controversial feature that was not included in the original Mockito. We recommend simply stubbing with thenReturn() or thenThrow(); those two are enough for testing and test-driven development.

```java
when(mock.someMethod(anyString())).thenAnswer(new Answer() {
```
Stubbing void methods requires a different approach from when(Object), because the compiler does not like void methods inside brackets...

doThrow(Throwable) replaces stubVoid(Object) for stubbing void methods, in order to stay consistent with the doAnswer() family of methods.

Use doThrow() when you want a stubbed void method to throw an exception:

```java
doThrow(new RuntimeException()).when(mockedList).clear();
```

You can use doThrow(), doAnswer(), doNothing(), doReturn() and doCallRealMethod() in place of the corresponding call to when(), for any method. This is necessary when you need the following:

You can, however, use these methods in place of when() for all of your stubbing calls.

Read more about these methods:
You can create spies of real objects. When you use a spy, the real methods are invoked unless they were stubbed. Use real spies sparingly and carefully, for example when dealing with legacy code.

Spying on real objects can be associated with the "partial mocking" concept. Before release 1.8, Mockito spies were not real partial mocks. The reason was that we thought partial mocking was a poor practice; at some point we found legitimate use cases for it (third-party interfaces, interim refactoring of legacy code; the full article is here).

```java
List list = new LinkedList();
```

It is important to understand how real objects are spied! Sometimes it is impossible or impractical to use when(Object) for stubbing spies. Therefore, when using spies, consider the doReturn|Answer|Throw() family of methods for stubbing. For example:

```java
List list = new LinkedList();
```

Mockito does not delegate calls to the passed real instance; instead, it creates a copy of it. So if you keep the real instance and interact with it, don't expect the spy to produce correct results for those interactions. Correspondingly, when an unstubbed method is called on the spy, the corresponding method of the real instance is not called, and you won't see any effects on the real instance.

The conclusion: if you spy on a real object and then try to stub that real object's methods, you are asking for trouble; alternatively, you should not verify those methods at all.
You can specify a strategy for the return values of a mock's unstubbed calls. This is an advanced feature, and usually you don't need it to write decent tests. However, it can be helpful when working with legacy systems. It is the default answer used when you don't stub a method call.

```java
Foo mock = mock(Foo.class, Mockito.RETURNS_SMART_NULLS);
```

For more information about RETURNS_SMART_NULLS, see the RETURNS_SMART_NULLS documentation.
Mockito verifies argument values in natural Java style: by using equals(). This is also the recommended way of matching arguments, because it makes tests clean and simple. In some situations, though, it is helpful to assert on certain arguments after the actual verification. For example:

```java
ArgumentCaptor<Person> argument = ArgumentCaptor.forClass(Person.class);
```

Warning: we recommend using ArgumentCaptor with verification but not with stubbing. Using ArgumentCaptor with stubbing reduces test readability, because the captor is created outside the assert block. It can also reduce defect localization, because if a stubbed method was not called, no argument is captured. In a way, ArgumentCaptor is related to custom argument matchers (see the ArgumentMatcher class documentation). Both techniques can be used for making sure certain arguments were passed to mocks; however, ArgumentCaptor may be a better fit in the following cases:

For material on custom argument matchers, refer to the ArgumentMatcher documentation.
After countless internal debates and discussions on the mailing list, Mockito finally decided to support partial mocking. Previously we did not, because we treated partial mocks as a way of making code worse. However, we found genuinely legitimate use cases for them; details here.

Before Mockito 1.8, spy() did not produce real partial mocks, and it was confusing to some users. More details: here, or in the Javadoc.

```java
// you can create partial mock with spy() method:
```

As usual, you are going to read the warning about partial mocking: object-oriented programming tackles complexity by dividing it into separate, specific, SRPy objects. How does partial mocking fit into this paradigm? Well, it just doesn't... Partial mocking usually means that the complexity has been moved to a different method on the same object. In most cases, this is not the way you want to design your application.

However, there are rare cases when partial mocks come in handy: dealing with code you cannot change easily (third-party interfaces, interim refactoring of legacy code, and the like). For new, test-driven, and well-designed code, though, I would not use partial mocks.
Smart Mockito users hardly ever use this feature, because they know it can be a sign of poor tests. Normally you don't need to reset your mocks; just create new mocks for each test method.

Instead of reaching for reset(), consider writing simple, small, and focused test methods rather than lengthy, over-specified tests. The first potential code smell is a reset() in the middle of a test method. It probably means you're testing too much. Follow the whisper of your test methods: "Please keep us small and focused on a single behavior." There are several threads about this on the Mockito mailing list.

The only reason we added reset() is to make it possible to work with container-injected mocks. See issue 55 or the FAQ for details.

Don't harm yourself: reset() in the middle of a test method is a code smell.

```java
List mock = mock(List.class);
```
First of all, in case of any trouble, I encourage you to read the Mockito FAQ. You may also post questions to the Mockito mailing list. Next, you should know that Mockito validates whether you are using it correctly all the time; if in doubt, see the documentation for validateMockitoUsage().
The behavior-driven development style of writing tests uses //given //when //then comments as fundamental parts of your test methods, and this is exactly how we are advised to write our unit tests!

The problem is that the current stubbing API, with the canonical role of the word "when", does not integrate nicely with //given //when //then comments: the stubbing belongs to the given component of the test, not to the when component. Hence the BDDMockito class introduces an alias so that you stub method calls with BDDMockito.given(Object). Now it integrates nicely with the given component of a BDD-style test.

```java
import static org.mockito.BDDMockito.*;
```
Mocks can be made serializable. With this feature you can use a mock in a place that requires its dependencies to be serializable.

Warning: this feature is rarely used in unit testing.

To create a serializable mock, use MockSettings.serializable():

```java
List serializableMock = mock(List.class, withSettings().serializable());
```

The feature was implemented for a concrete use case of a BDD spec that had an unreliable external dependency: objects from the external dependency in a web environment were serialized and passed between layers.

The mock can be serialized assuming all the normal serialization requirements are met by the class.

Making a real object spy serializable requires slightly more effort, because the spy(...) method does not have an overloaded version that accepts MockSettings. No worries: you will hardly ever use it.

```java
List<Object> list = new ArrayList<Object>();
```
The new annotations introduced in v1.8.3 may be useful in certain scenarios:

@Captor simplifies the creation of an ArgumentCaptor — useful when the argument to capture is a nasty generic class and you want to avoid compile-time warnings.

@Spy — you can use it instead of spy(Object).

@InjectMocks — automatically injects mock or spy fields into the object under test. Note that @InjectMocks can also be used in combination with @Spy, which means Mockito will inject mocks into the partial mock under test. This complexity is another good reason to use partial mocks sparingly.

All of the new annotations are processed only by MockitoAnnotations.initMocks(Object), just like the @Mock annotation you use with the built-in runner MockitoJUnitRunner or the rule MockitoRule.
Allows verifying with timeout. It makes a verification wait for a specified period of time for a desired interaction rather than fail immediately if it has not yet happened. This can be useful for testing under concurrent conditions.

This feature should be used rarely; figure out a better way of testing your multi-threaded system instead.

It has not been implemented to work with InOrder verification yet.

Example:

```java
// passes when someMethod() is called within given time span
```
Mockito will now try to instantiate fields annotated with @Spy and to instantiate or inject fields annotated with @InjectMocks using constructor injection, setter injection, or field injection, where possible.

To take advantage of this feature you need to use MockitoAnnotations.initMocks(Object), MockitoJUnitRunner, or MockitoRule.

For InjectMocks, read more about the available tricks and the rules of injection in the Javadoc.

```java
// instead:
```
Mockito now allows you to create mocks while stubbing. Basically, it lets you create a stub in one line of code, which is helpful for keeping test code clean. For example, some boring stub can be created and stubbed at field initialization in a test:

```java
public class CarTest {
```
Mockito now allows ignoring stubbed invocations for the sake of verification. This is sometimes useful when coupled with verifyNoMoreInteractions() or verification inOrder(). It helps to avoid redundant verification of stubbed calls — obviously we are usually not interested in verifying stubs.

Warning: ignoreStubs() might lead to overuse of verifyNoMoreInteractions(ignoreStubs(...)). Bear in mind that Mockito does not recommend bombarding every test with verifyNoMoreInteractions(), for the reasons given in the Javadoc.

Some examples:

```java
verify(mock).foo();
```

More examples and further details can be found in the Javadoc for ignoreStubs(Object...).
To identify whether a particular object is a mock or a spy:

```java
Mockito.mockingDetails(someObject).isMock();
```

Both MockingDetails.isMock() and MockingDetails.isSpy() return a boolean. Since a spy is just a variant of a mock, isMock() returns true for spy objects. In future Mockito versions MockingDetails may grow and provide other useful information about a mock, e.g. invocations, stubbing info, and so on.
Useful when the regular spy API makes it hard to mock or spy an object: delegation lets you spy or mock part of an object. Since Mockito 1.10.11, the delegate may or may not be of the same type as the mock. If the types differ, the delegate type needs to provide a matching method, otherwise an exception is thrown. Here are some use cases for this feature:

The difference from a regular spy:

A regular spy (spy(Object)) contains all the state of the spied instance, and methods are invoked on the spy object itself. The spied instance is only used at mock creation to copy its state. If you call a method on a regular spy and it internally calls other methods on that spy, those calls are recorded for later verification, and stubbing applies to them as well.

A mock that delegates simply delegates all methods to the delegate. The delegate is used all the time, as the methods are delegated onto it. If you call a method on a delegating mock and it internally calls other methods on the delegate, those internal calls are not recorded, and stubbing does not apply to them.

Mocks with delegates are considerably less powerful than regular spies, but they are useful when a regular spy cannot be created.

See AdditionalAnswers.delegatesTo(Object) for more information.
Driven by user demand and Android platform use, Mockito now offers an extension point that allows replacing the proxy-generation engine. By default, Mockito uses cglib to create dynamic proxies.

The extension point is intended for advanced users who want to extend Mockito. For example, it is now possible to use Mockito for Android testing with the help of dexmaker.

For more details, motivations, and examples, see the documentation for MockMaker.
Enables Behavior Driven Development (BDD) style verification by starting the verification with the BDD keyword then.

```java
given(dog.bark()).willReturn(2);
```

For more information see BDDMockito.then(Object).
It is now possible to conveniently spy on abstract classes. Note that overusing spies may hint at code-design smells (see spy(Object)).

Previously, spying was only possible on instances of objects. The new API makes it possible to use a constructor when creating an instance of the mock. This is particularly useful for mocking abstract classes, because the user no longer needs to provide an instance of the abstract class. Currently, only parameterless constructors are supported; let us know if that is not enough.

```java
// convenience API, new overloaded spy() method:
```

For more information see MockSettings.useConstructor().
Mockito introduces serialization across classloaders. As with any other form of serialization, all types in the mock's hierarchy have to be serializable, including answers. Because this mode of serialization requires considerably more work, it is an opt-in setting.

```java
// regular serialization
```

For more information see MockSettings.serializable(SerializableMode).
Deep stubbing has been improved to find generic information if it is available in the class. That means a class such as the following can be used without having to mock its behavior:

```java
class Lines extends List<Line> {
```

Note that in most scenarios, a mock returning a mock is wrong.
Mockito now offers a JUnit rule. Until now, there were two ways to initialize fields annotated with Mockito annotations such as @Mock, @Spy, @InjectMocks, and so on. Now you can choose to use a rule:

For more information see MockitoJUnit.rule().
This is an experimental feature that allows switching a mockito-plugin on or off. For details see PluginSwitch.

35. Custom verification failure message (Since 2.0.0)

Allows specifying a custom message to be printed if verification fails.

Example:

```java
// will print a custom message on verification failure
```
HTTPS is HTTP with an SSL layer underneath; the security of HTTPS rests on SSL, so the encryption details are handled by SSL. This article describes how to issue a free certificate with Let's Encrypt to enable HTTPS on Apache.

Let's Encrypt is a free, automated, and open certificate authority (CA) provided by the non-profit Internet Security Research Group (ISRG). Simply put, certificates issued by Let's Encrypt let us enable HTTPS (SSL/TLS) on our websites for free.

Issuing and renewing Let's Encrypt certificates is fully scripted; the project offers several ways to request a certificate — see here for a quick overview. The officially recommended client is Certbot, which obtains free Let's Encrypt certificates for us and supports all Unix-like operating systems.
```bash
$ sudo yum install certbot -y
```

After installation, run certbot once as a smoke test; if there are no problems, continue to the next step.

Check whether the ports are in use. If another service (such as Nginx or Apache) occupies port 80 or 443, stop it first and re-enable it after the certificate has been generated:

```bash
$ netstat -tunlp | grep :443
$ netstat -tunlp | grep :80
```

Otherwise, the certificate-generation step below will fail with `Problem binding to port 80: Could not bind to IPv4 or IPv6.`
Next, generate the certificate:

```bash
$ certbot certonly --standalone -d www.fcj.one
```

Once generated, the certificate files can be found in the domain's folder under the /etc/letsencrypt/live/ directory.

That completes step one, generating the certificate. The next step is configuring the Apache server to enable HTTPS.
Open the httpd.conf file in the conf directory of the Apache installation, find the following line, and remove the leading '#':

```apache
LoadModule ssl_module modules/mod_ssl.so
```

If the line does not exist, add it manually, provided the ssl module is installed: `yum install mod_ssl`.

Then add the following configuration in /etc/httpd/conf/httpd.conf:

```apache
<VirtualHost *:443>
```
Restart the Apache server: `service httpd restart`

Certificates issued by Let's Encrypt are valid for only 90 days, so they must be re-obtained before they expire. certbot provides a convenient command, `certbot renew`, which checks the certificates on the system and renews them automatically. Here we simply schedule periodic renewal with crontab (see the crontab usage reference):

```cron
0 4 * */2 * certbot renew --pre-hook "systemctl stop httpd" --post-hook "systemctl start httpd"
```

This attempts renewal at 4 a.m. in every second month (certbot renew only actually renews certificates that are close to expiry), stopping the httpd service beforehand and starting it again afterwards.
If you have a firewall configured, open port 443 in it.

With firewalld, use:

```bash
sudo firewall-cmd --add-service=http
sudo firewall-cmd --add-service=https
sudo firewall-cmd --runtime-to-permanent
```

With iptables, use:

```bash
sudo iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
```
CI stands for Continuous Integration. With continuous integration, every code change triggers the CI server to automatically build and test the project, report the results, and potentially even deploy to a test or production environment.

Travis CI is a platform that provides continuous-integration services. On GitHub, you can add the Travis CI app so that every code push is reported to Travis, which then runs the tasks specified in your configuration. Unlike Jenkins, Travis CI is hosted by the vendor, so you don't have to run your own server; however, it only supports GitHub, not other code-hosting services.

There are currently two sites; each only sees its own projects, and they are not interchangeable.
Updating a Hexo blog post usually follows this flow:

- hexo g && gulp
- hexo s --debug, then open http://localhost:4000/ to preview the generated post
- hexo d

The rest of this article shows how to use Travis CI to automate steps 3-6 of that flow.
Go to the travis-ci.org site and sign in with your GitHub account. Click your avatar in the top-right corner to open Settings, and on the Repositories tab click Sync now to sync your GitHub projects. Flip the switch for your Hexo blog project from the default off to on to enable continuous integration for it.

When you push to the GitHub repository, Travis reads a configuration file named .travis.yml to run Hexo's generate and deploy steps. Deployment needs SSH write access to the VPS and to GitHub, so an SSH key must be uploaded with the project. For convenient automated deployment, the local machine, GitHub, and the VPS all use the same SSH key pair (the id_rsa and id_rsa.pub below). Because the GitHub project is public, the SSH key has to be stored encrypted in the project; Travis decrypts it at run time to regenerate the key, keeping the private key safe.

The travis encryption command is installed via gem, and gem requires a Ruby environment, so make sure Ruby is installed. gem can be hard to reach from mainland China; it is recommended to run the following inside the blog's project directory on the VPS (clone one if you don't have it, because the --add option below modifies .travis.yml):

```bash
gem install travis
```
In the Travis CI console, click the build:passing badge, choose the Markdown format, and paste the snippet into the project's README.

Once everything is configured, whenever you finish editing a Markdown file locally, a plain git push to GitHub triggers Travis CI to build and deploy the new blog content.
travis-ci only pre-registers github.com, gist.github.com, and ssh.github.com in known_hosts. When hexo d runs, SSH asks whether to add the server to known_hosts, and there is no way to answer interactively on travis-ci, so add the server's IP and port to known_hosts yourself:

```yaml
addons:
  ssh_known_hosts: host-ip:ssh-port
```
```bash
$ npm install gulp -g
```

```js
var gulp = require('gulp');
```
```bash
git clone repository_url
```

```bash
$ cd existing_folder
```
List all branches:

```bash
$ git branch  # list all branches; the current branch is marked with an asterisk
```

Create a new branch:

```bash
$ git checkout -b dev  # equivalent to the two commands below
```

Switch branches:

```bash
$ git checkout dev  # switch to the dev branch
```

Merge a branch:

```bash
$ git checkout master  # switch to the master branch
```

Push a branch:

```bash
$ git push origin dev
```

Delete a branch:

```bash
$ git branch -d dev  # delete the local dev branch
```
Configure the upstream remote: point your fork at the original project. For example, I forked a project whose original is theme-next/hexo-theme-next.git, making my fork ChangingFond/hexo-theme-next.git. Configure it with:

```bash
$ git remote add upstream https://github.com/theme-next/hexo-theme-next.git
```

Check the configuration; the upstream project's address has been added:

```bash
$ git remote -v
```

Fetch updates from the upstream project. After the fetch, they are stored on the local branch upstream/master:

```bash
$ git fetch upstream
```

Merge into the local branch: switch to master and merge upstream/master:

```bash
$ git merge upstream/master
```

Commit and push your own project's code as appropriate:

```bash
$ git push origin master
```

Since the upstream address is already configured, whenever the forked project is updated again, just repeat steps 3, 4, and 5:

```bash
$ git fetch --all
```
You can check whether cron is running with `ps -ef | grep cron`. cron reads one or more configuration files containing command lines and their invocation times. cron's configuration file is called a crontab, short for cron table.

Install on CentOS 7:

```bash
yum install vixie-cron
yum install crontabs
```

Start the service:

```bash
service crond start
```

Enable it at boot:

```bash
chkconfig --level 35 crond on
```

To confirm it is enabled at boot, run `chkconfig | grep crond` and check that levels 2, 3, 4, and 5 are on.

crond is enabled at boot by default; a regular user needs sudo privileges to configure boot-time startup.
Other commands:

```bash
service crond stop     # stop the service
service crond restart  # restart the service
service crond reload   # reload the configuration
service crond status   # show service status
```

For crontab permission issues, check under /var/adm/cron/ whether the files cron.allow and cron.deny exist.
The rules are as follows:

1. If neither file exists, only root may use the crontab command.
2. If cron.allow exists but cron.deny does not, only users listed in cron.allow may use crontab; if root is not listed there, even root may not use it.
3. If cron.allow does not exist but cron.deny does, only users listed in cron.deny are barred from crontab; all other users may use it.
4. If both files exist, users listed in cron.allow and not listed in cron.deny may use crontab; if the same user appears in both files, cron.allow wins, so a user listed in cron.allow may use the crontab command.

On CentOS 7, regular users have no crontab permission by default; to grant it, edit /var/adm/cron/cron.deny.
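The four access rules above can be encoded as a small predicate (a sketch for clarity; the function name and parameters below are mine, not part of any cron tool):

```java
public class CronAccess {
    // allowExists / denyExists: whether cron.allow / cron.deny exist;
    // inAllow / inDeny: whether the user is listed in the respective file
    static boolean mayUseCrontab(boolean isRoot, boolean allowExists, boolean denyExists,
                                 boolean inAllow, boolean inDeny) {
        if (allowExists) {
            // rules 2 and 4: cron.allow is authoritative whenever it exists
            return inAllow;
        }
        if (denyExists) {
            // rule 3: everyone except the listed users
            return !inDeny;
        }
        // rule 1: neither file exists, so root only
        return isRoot;
    }
}
```

Note how rule 4 collapses into rule 2: once cron.allow exists, membership in it is the only thing that matters, even for root and even for users also listed in cron.deny.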
The crontab command installs, removes, or lists the tables that drive the cron daemon. A user puts the command sequence to be executed into a crontab file; each user can have their own crontab file. The crontab files under /var/spool/cron must not be created or edited directly; they are created through the crontab command.

Run `ls /etc/cron` and press TAB twice to see the related files and directories:

cron.d/ cron.daily/ cron.hourly/ cron.monthly/ crontab cron.weekly/

You can edit the crontab file to create scheduled tasks. The directories with the daily, hourly, weekly, and monthly suffixes hold tasks run every day, every hour, every week, and every month respectively. They contain shell scripts with permission 755; to run a task, write it as a shell script and drop it into the appropriate directory. Tasks with irregular schedules go into the cron.d directory, which can be seen as a supplement to the crontab file.

Note that crontabs are per user: you edit the crontab of whichever user you are logged in as, and you must edit it as a user that has cron permission.

```
crontab -e : edit a user's crontab file; if no user is specified, edits the current user's crontab
```
“*”代表所有的取值范围内的数字。特别要注意哦!
“/“代表每的意思,如”*/5”表示每5个单位
“-“代表从某个数字到某个数字
“,”分散的数字
1 | 每晚21:30重启apache |
A first-level domain (abc.cn), also called a top-level domain, generally costs money to register. A second-level domain (blog.abc.cn) is an extension of the first-level domain; www is in fact also a second-level domain — we are just used to treating www as the site's main domain. Through your DNS provider, you can add a resolution record for the main domain on the domain-management page, for example a second-level domain starting with blog:

Host record | Type | Line | Value | MX priority | TTL
---|---|---|---|---|---
blog | A | default | 45.54.23.1 | - | 600

Once set, ping blog.abc.com; if the returned IP address is the server's IP, DNS resolution has succeeded.
Method one:

To map the newly created second-level domain to a directory on the server, edit Apache's httpd.conf (`vi /etc/httpd/conf/httpd.conf`) and add the following code. Each second-level domain corresponds to one VirtualHost block; you need as many VirtualHost blocks as you have second-level domains.

```apache
NameVirtualHost *:80
```
Method two:

1. Uncomment these two lines in httpd.conf:

```apache
DocumentRoot "/var/www/html"
ServerAdmin you@example.com
```

2. Then uncomment the Include line under Virtual hosts to pull in the virtual-server configuration file:

```apache
# Virtual hosts
Include conf/extra/httpd-vhosts.conf
```

3. Add the same configuration as above to conf/extra/httpd-vhosts.conf (create the file if it does not exist).

Finally, restart the Apache server: `service httpd restart`
Many parameters may appear in a virtual-host block, but at minimum you must define DocumentRoot and ServerName. The parameters are:

ServerAdmin — administrator's email address
DocumentRoot — path the host should serve
ServerName — domain name
ServerAlias — domain alias (optional)
ErrorLog — error log
CustomLog — access log
The page can be one you wrote yourself, or ready-made source code from elsewhere. Once the page is done, create a folder (any name) under Hexo\source, place the HTML file in it, and rename the file index.html.

Add the skip-rendering directive to the HTML file: open the index.html in the folder you created under Hexo\source with an editor and add the code at the top.

With that directive in place, hexo g skips that index.html, so it is unaffected by the current Hexo theme and remains a completely standalone page.

If the page references CSS or JS, those files must be loaded from external links. Referenced images, however, can live in an img folder inside the page directory and be referenced directly, with no need for external links.
Alternatively, open the _config.yml file in the Hexo directory with an editor and find skip_render. skip_render commonly takes four forms:

Skip test.html under the source directory: `skip_render: test.html`
Skip every file in the test folder under source: `skip_render: test/*`
Skip every file in the test folder, including subfolders and their files: `skip_render: test/**`
Skip multiple paths:

```yaml
skip_render:
```

The format is strict, so mind the formatting when filling in the parameters; once added, the specified files/folders will not be rendered. If the page references CSS or JS and the whole page directory is set to skip rendering, there is no need to create external links for the CSS and JS; they can be referenced directly.
Last Updated: Apr 26, 2019
Method | backbone | test size | Market1501 | CUHK03 (detected) | CUHK03 (detected/new) | CUHK03 (labeled/new) | CUHK-SYSU | DukeMTMC-reID | MARS |
---|---|---|---|---|---|---|---|---|---|
 | | | rank1 / mAP | rank1 / 5 / 10 | rank1 / mAP | rank1 / mAP | rank1 / mAP | rank1 / mAP | |
AlignedReID | ResNet50-X | 92.6 / 82.3 | 91.9 / 98.7 / 99.4 | 86.8 / 79.1 | 95.3 / 93.7 | ||||
AlignedReID (RK) | 94.0 / 91.2 | 96.1 / 99.5 / 99.6 | 87.5 / 85.6 | ||||||
Deep-Person(SQ) | ResNet-50 | 256×128 | 92.31 / 79.58 | 89.4 / 98.2 / 99.1 | 80.90 / 64.80 | ||||
Deep-Person(MQ) | ResNet-50 | 256×128 | 94.48 / 85.09 | ||||||
PCB(SQ) | ResNet-50 | 384x128 | 92.4 / 77.3 | 61.3 / 54.2 | 81.9 / 65.3 | ||||
PCB+RPP(SQ) | ResNet-50 | 384x128 | 93.8 / 81.6 | 63.7 / 57.5 | 83.3 / 69.2 | ||||
PN-GAN (SQ) | ResNet-50 | 89.43 / 72.58 | 79.76/ 96.24/ 98.56 | 73.58 / 53.20 | |||||
PN-GAN (MQ) | ResNet-50 | 95.90 / 91.37 | |||||||
MGN (SQ) | ResNet-50 | 95.7 / 86.9 | 66.8 / 66.0 | 68.0 / 67.4 | 88.7 / 78.4 | ||||
MGN (MQ) | ResNet-50 | 96.9 / 90.7 | |||||||
MGN (SQ+RK) | ResNet-50 | 96.6 / 94.2 | |||||||
MGN (MQ+RK) | ResNet-50 | 97.1 / 95.9 | |||||||
HPM(SQ) | ResNet-50 | 384x128 | 94.2 / 82.7 | 63.1 / 57.5 | 86.6 / 74.3 | ||||
HPM+HRE(SQ) | ResNet-50 | 384x128 | 93.9 / 83.1 | 63.2 / 59.7 | 86.3 / 74.5 | - | |||
SphereReID | ResNet-50 | 288×144 | 94.4 / 83.6 | 93.1 / 98.7 / 99.4 | 63.2 / 59.7 | 95.4 / 93.9 | 83.9 / 68.5 | - | |
Auto-ReID | 384x128 | 94.5 / 85.1 | 73.3 / 69.3 | 77.9 / 73.0 | 88.5 / 75.1 | - |
DeepReID: Deep Filter Pairing Neural Network for Person Re-Identification
intro: CVPR 2014
paper: http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Li_DeepReID_Deep_Filter_2014_CVPR_paper.pdf
An Improved Deep Learning Architecture for Person Re-Identification
intro: CVPR 2015
paper: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Ahmed_An_Improved_Deep_2015_CVPR_paper.pdf
github: https://github.com/Ning-Ding/Implementation-CVPR2015-CNN-for-ReID
Deep Ranking for Person Re-identification via Joint Representation Learning
intro: IEEE Transactions on Image Processing (TIP), 2016
arxiv: https://arxiv.org/abs/1505.06821
PersonNet: Person Re-identification with Deep Convolutional Neural Networks
arxiv: http://arxiv.org/abs/1601.07255
Learning Deep Feature Representations with Domain Guided Dropout for Person Re-identification
intro: CVPR 2016
arxiv: https://arxiv.org/abs/1604.07528
github: https://github.com/Cysu/dgd_person_reid
Person Re-Identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function
intro: CVPR 2016
paper: http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Cheng_Person_Re-Identification_by_CVPR_2016_paper.pdf
Joint Learning of Single-image and Cross-image Representations for Person Re-identification
intro: CVPR 2016
paper: http://openaccess.thecvf.com/content_cvpr_2016/papers/Wang_Joint_Learning_of_CVPR_2016_paper.pdf
End-to-End Comparative Attention Networks for Person Re-identification
paper: https://arxiv.org/abs/1606.04404
A Multi-task Deep Network for Person Re-identification
intro: AAAI 2017
arxiv: http://arxiv.org/abs/1607.05369
A Siamese Long Short-Term Memory Architecture for Human Re-Identification
arxiv: http://arxiv.org/abs/1607.08381
Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification
intro: ECCV 2016
keywords: Market1501 rank1 = 65.9%
arxiv: https://arxiv.org/abs/1607.08378
Deep Neural Networks with Inexact Matching for Person Re-Identification
intro: NIPS 2016
keywords: Normalized correlation layer, CUHK03/CUHK01/QMULGRID
paper: https://papers.nips.cc/paper/6367-deep-neural-networks-with-inexact-matching-for-person-re-identification
github: https://github.com/InnovArul/personreid_normxcorr
Person Re-identification: Past, Present and Future
paper: https://arxiv.org/abs/1610.02984
note: https://blog.csdn.net/zdh2010xyz/article/details/53741682
Deep Learning Prototype Domains for Person Re-Identification
arxiv: https://arxiv.org/abs/1610.05047
Deep Transfer Learning for Person Re-identification
arxiv: https://arxiv.org/abs/1611.05244
note: https://blog.csdn.net/shenxiaolu1984/article/details/53607268
A Discriminatively Learned CNN Embedding for Person Re-identification
intro: TOMM 2017
arxiv: https://arxiv.org/abs/1611.05666
github(official, MatConvnet): https://github.com/layumi/2016_person_re-ID
github: https://github.com/D-X-Y/caffe-reid
Person Re-Identification via Recurrent Feature Aggregation
intro: ECCV 2016
keywords: recurrent feature aggregation network (RFA-Net)
arxiv: https://arxiv.org/abs/1701.06351
code: https://sites.google.com/site/yanyichao91sjtu/
github(official): https://github.com/daodaofr/caffe-re-id
Structured Deep Hashing with Convolutional Neural Networks for Fast Person Re-identification
arxiv: https://arxiv.org/abs/1702.04179
SVDNet for Pedestrian Retrieval
intro: ICCV 2017 spotlight
intro: On the Market-1501 dataset, rank-1 accuracy is improved from 55.2% to 80.5% for CaffeNet,
and from 73.8% to 83.1% for ResNet-50
arxiv: https://arxiv.org/abs/1703.05693
github: https://github.com/syfafterzy/SVDNet-for-Pedestrian-Retrieval
In Defense of the Triplet Loss for Person Re-Identification
arxiv: https://arxiv.org/abs/1703.07737
github(Theano): https://github.com/VisualComputingInstitute/triplet-reid
Beyond triplet loss: a deep quadruplet network for person re-identification
intro: CVPR 2017
arxiv: https://arxiv.org/abs/1704.01719
paper: http://cvip.computing.dundee.ac.uk/papers/Chen_CVPR_2017_paper.pdf
Quality Aware Network for Set to Set Recognition
intro: CVPR 2017
arxiv: https://arxiv.org/abs/1704.03373
github: https://github.com/sciencefans/Quality-Aware-Network
Learning Deep Context-aware Features over Body and Latent Parts for Person Re-identification
intro: CVPR 2017. CASIA
keywords: Multi-Scale Context-Aware Network (MSCAN)
arxiv: https://arxiv.org/abs/1710.06555
supplemental: Li_Learning_Deep_Context-Aware_2017_CVPR_supplemental.pdf
Point to Set Similarity Based Deep Feature Learning for Person Re-identification
intro: CVPR 2017
paper: http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhou_Point_to_Set_CVPR_2017_paper.pdf
github(stay tuned): https://github.com/samaonline/Point-to-Set-Similarity-Based-Deep-Feature-Learning-for-Person-Re-identification
Scalable Person Re-identification on Supervised Smoothed Manifold
intro: CVPR 2017 spotlight
arxiv: https://arxiv.org/abs/1703.08359
youtube: https://www.youtube.com/watch?v=bESdJgalQrg
Attention-based Natural Language Person Retrieval
intro: CVPR 2017 Workshop (vision meets cognition)
keywords: Bidirectional Long Short Term Memory (BLSTM)
arxiv: https://arxiv.org/abs/1705.08923
Part-based Deep Hashing for Large-scale Person Re-identification
intro: IEEE Transactions on Image Processing, 2017
arxiv: https://arxiv.org/abs/1705.02145
Deep Person Re-Identification with Improved Embedding and Efficient Training
intro: IJCB 2017
arxiv: https://arxiv.org/abs/1705.03332
Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters
arxiv: https://arxiv.org/abs/1705.04608
github: https://github.com/VisualComputingInstitute/towards-reid-tracking
Person Re-Identification by Deep Joint Learning of Multi-Loss Classification
intro: IJCAI 2017
arxiv: https://arxiv.org/abs/1705.04724
Deep Representation Learning with Part Loss for Person Re-Identification
keywords: Part Loss Networks
arxiv: https://arxiv.org/abs/1707.00798
Pedestrian Alignment Network for Large-scale Person Re-identification
arxiv: https://arxiv.org/abs/1707.00408
github: https://github.com/layumi/Pedestrian_Alignment
Learning Efficient Image Representation for Person Re-Identification
arxiv: https://arxiv.org/abs/1707.02319
Person Re-identification Using Visual Attention
intro: ICIP 2017
arxiv: https://arxiv.org/abs/1707.07336
What-and-Where to Match: Deep Spatially Multiplicative Integration Networks for Person Re-identification
arxiv: https://arxiv.org/abs/1707.07074
Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification
arxiv: https://arxiv.org/abs/1707.07791
Large Margin Learning in Set to Set Similarity Comparison for Person Re-identification
intro: IEEE Transactions on Multimedia
arxiv: https://arxiv.org/abs/1708.05512
Multi-scale Deep Learning Architectures for Person Re-identification
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1709.05165
Person Re-Identification by Deep Learning Multi-Scale Representations
intro: ICCV 2017
keywords: Deep Pyramid Feature Learning (DPFL)
paper: Chen_Person_Re-Identification_by_ICCV_2017_paper.pdf
paper: http://www.eecs.qmul.ac.uk/~sgg/papers/ChenEtAl_ICCV2017WK_CHI.pdf
HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis
intro: ICCV 2017. CUHK & SenseTime,
arxiv: https://arxiv.org/abs/1709.09930
github: https://github.com/xh-liu/HydraPlus-Net
Person Re-Identification with Vision and Language
arxiv: https://arxiv.org/abs/1710.01202
Margin Sample Mining Loss: A Deep Learning Based Method for Person Re-identification
arxiv: https://arxiv.org/abs/1710.00478
Pseudo-positive regularization for deep person re-identification
arxiv: https://arxiv.org/abs/1711.06500
Let Features Decide for Themselves: Feature Mask Network for Person Re-identification
keywords: Feature Mask Network (FMN)
arxiv: https://arxiv.org/abs/1711.07155
AlignedReID: Surpassing Human-Level Performance in Person Re-Identification
intro: Megvii Inc & Zhejiang University
arxiv: https://arxiv.org/abs/1711.08184
evaluation website: (Market1501): http://reid-challenge.megvii.com/
evaluation website: (CUHK03): http://reid-challenge.megvii.com/cuhk03
github: https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch
Region-based Quality Estimation Network for Large-scale Person Re-identification
intro: AAAI 2018
arxiv: https://arxiv.org/abs/1711.08766
Beyond Part Models: Person Retrieval with Refined Part Pooling
keywords: Part-based Convolutional Baseline (PCB), Refined Part Pooling (RPP)
arxiv: https://arxiv.org/abs/1711.09349
Deep-Person: Learning Discriminative Deep Features for Person Re-Identification
arxiv: https://arxiv.org/abs/1711.10658
Hierarchical Cross Network for Person Re-identification
arxiv: https://arxiv.org/abs/1712.06820
Re-ID done right: towards good practices for person re-identification
arxiv: https://arxiv.org/abs/1801.05339
Triplet-based Deep Similarity Learning for Person Re-Identification
intro: ICCV Workshops 2017
arxiv: https://arxiv.org/abs/1802.03254
Group Consistent Similarity Learning via Deep CRFs for Person Re-Identification
intro: CVPR 2018 oral
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Group_Consistent_Similarity_CVPR_2018_paper.pdf
Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification
intro: CVPR 2018
keywords: similarity preserving generative adversarial network (SPGAN), Siamese network, CycleGAN, domain adaptation
arxiv: https://arxiv.org/abs/1711.07027
Harmonious Attention Network for Person Re-Identification
intro: CVPR 2018
keywords: Harmonious Attention CNN (HA-CNN)
arxiv: https://arxiv.org/abs/1802.08122
Camera Style Adaptation for Person Re-identification
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1711.10295
github: https://github.com/zhunzhong07/CamStyle
Dual Attention Matching Network for Context-Aware Feature Sequence based Person Re-Identification
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1803.09937
Multi-Level Factorisation Net for Person Re-Identification
intro: CVPR 2018
keywords: Multi-Level Factorisation Net (MLFN)
arxiv: https://arxiv.org/abs/1803.09132
Features for Multi-Target Multi-Camera Tracking and Re-Identification
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1803.10859
Good Appearance Features for Multi-Target Multi-Camera Tracking
intro: CVPR 2018 spotlight. Duke University
keywords: adaptive weighted triplet loss, hard-identity mining
project page: http://vision.cs.duke.edu/DukeMTMC/
arxiv: https://arxiv.org/abs/1803.10859
Mask-guided Contrastive Attention Model for Person Re-Identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Song_Mask-Guided_Contrastive_Attention_CVPR_2018_paper.pdf
Efficient and Deep Person Re-Identification using Multi-Level Similarity
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1803.11353
Person Re-identification with Cascaded Pairwise Convolutions
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Person_Re-Identification_With_CVPR_2018_paper.pdf
Attention-Aware Compositional Network for Person Re-identification
intro: CVPR 2018
intro: SenseNets Technology Limited & University of Sydney
keywords: Attention-Aware Compositional Network (AACN), Pose-guided Part Attention (PPA), Attention-aware Feature Composition (AFC)
arxiv: https://arxiv.org/abs/1805.03344
Deep Group-shuffling Random Walk for Person Re-identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_Deep_Group-Shuffling_Random_CVPR_2018_paper.pdf
Adversarially Occluded Samples for Person Re-identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Adversarially_Occluded_Samples_CVPR_2018_paper.pdf
Easy Identification from Better Constraints: Multi-Shot Person Re-Identification from Reference Constraints
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Easy_Identification_From_CVPR_2018_paper.pdf
Eliminating Background-bias for Robust Person Re-identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Tian_Eliminating_Background-Bias_for_CVPR_2018_paper.pdf
End-to-End Deep Kronecker-Product Matching for Person Re-identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_End-to-End_Deep_Kronecker-Product_CVPR_2018_paper.pdf
Exploiting Transitivity for Learning Person Re-identification Models on a Budget
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Roy_Exploiting_Transitivity_for_CVPR_2018_paper.pdf
Resource Aware Person Re-identification across Multiple Resolutions
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Resource_Aware_Person_CVPR_2018_paper.pdf
Multi-Channel Pyramid Person Matching Network for Person Re-Identification
intro: 32nd AAAI Conference on Artificial Intelligence
keywords: Multi-Channel deep convolutional Pyramid Person Matching Network (MC-PPMN)
arxiv: https://arxiv.org/abs/1803.02558
Pyramid Person Matching Network for Person Re-identification
intro: 9th Asian Conference on Machine Learning (ACML2017) JMLR Workshop and Conference Proceedings
arxiv: https://arxiv.org/abs/1803.02547
Virtual CNN Branching: Efficient Feature Ensemble for Person Re-Identification
arxiv: https://arxiv.org/abs/1803.05872
Adversarial Binary Coding for Efficient Person Re-identification
arxiv: https://arxiv.org/abs/1803.10914
Learning View-Specific Deep Networks for Person Re-Identification
intro: IEEE Transactions on image processing. Sun Yat-Sen University
keywords: cross-view Euclidean constraint (CV-EC), cross-view center loss (CV-CL)
arxiv: https://arxiv.org/abs/1803.11333
Learning Discriminative Features with Multiple Granularities for Person Re-Identification
intro: Shanghai Jiao Tong University & CloudWalk
keywords: Multiple Granularity Network (MGN)
arxiv: https://arxiv.org/abs/1804.01438
Recurrent Neural Networks for Person Re-identification Revisited
intro: Stanford University & Google AI
arxiv: https://arxiv.org/abs/1804.03281
MaskReID: A Mask Based Deep Ranking Neural Network for Person Re-identification
arxiv: https://arxiv.org/abs/1804.03864
Horizontal Pyramid Matching for Person Re-identification
intro: AAAI 2019
intro: UIUC & IBM Research & Cornell University & Stevens Institute of Technology & CloudWalk Technology
keywords: Horizontal Pyramid Matching (HPM), Horizontal Pyramid Pooling (HPP), horizontal random erasing (HRE)
arxiv: https://arxiv.org/abs/1804.05275
github: https://github.com/OasisYang/HPM
Part-Aligned Bilinear Representations for Person Re-identification
intro: Seoul National University & Microsoft Research & Max Planck Institute & University of Tubingen & JD.COM
arxiv: https://arxiv.org/abs/1804.07094
Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification
arxiv: https://arxiv.org/abs/1804.11027
Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification
arxiv: https://arxiv.org/abs/1805.06118
Resource Aware Person Re-identification across Multiple Resolutions
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1805.08805
Semantically Selective Augmentation for Deep Compact Person Re-Identification
arxiv: https://arxiv.org/abs/1806.04074
SphereReID: Deep Hypersphere Manifold Embedding for Person Re-Identification
intro: it achieves 94.4% rank-1 accuracy on Market-1501 and 83.9% rank-1 accuracy on DukeMTMC-reID
arxiv: https://arxiv.org/abs/1807.00537
Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification
intro: BMVC 2018. University of Warwick & Nanyang Technological University & Charles Sturt University
arxiv: https://arxiv.org/abs/1807.01440
Discriminative Feature Learning with Foreground Attention for Person Re-Identification
arxiv: https://arxiv.org/abs/1807.01455
Part-Aligned Bilinear Representations for Person Re-identification
intro: ECCV 2018
intro: Seoul National University & Microsoft Research & Max Planck Institute & University of Tubingen & JD.COM
arxiv: https://arxiv.org/abs/1804.07094
github: https://github.com/yuminsuh/part_bilinear_reid
Mancs: A Multi-task Attentional Network with Curriculum Sampling for Person Re-identification
intro: ECCV 2018. Huazhong University of Science and Technology & Horizon Robotics Inc.
Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association
intro: ECCV 2018
arxiv: https://arxiv.org/abs/1808.01571
Deep Sequential Multi-camera Feature Fusion for Person Re-identification
arxiv: https://arxiv.org/abs/1807.07295
Improving Deep Models of Person Re-identification for Cross-Dataset Usage
intro: AIAI 2018 (14th International Conference on Artificial Intelligence Applications and Innovations) proceeding
arxiv: https://arxiv.org/abs/1807.08526
Measuring the Temporal Behavior of Real-World Person Re-Identification
arxiv: https://arxiv.org/abs/1808.05499
Alignedreid++: Dynamically Matching Local Information for Person Re-Identification
github: https://github.com/michuanhaohao/AlignedReID
Sparse Label Smoothing for Semi-supervised Person Re-Identification
arxiv: https://arxiv.org/abs/1809.04976
github: https://github.com/jpainam/SLS_ReID
In Defense of the Classification Loss for Person Re-Identification
intro: University of Science and Technology of China & Microsoft Research Asia
arxiv: https://arxiv.org/abs/1809.05864
FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification
intro: NIPS 2018
arxiv: https://arxiv.org/abs/1810.02936
github(Pytorch, official): https://github.com/yxgeee/FD-GAN
Image-to-Video Person Re-Identification by Reusing Cross-modal Embeddings
arxiv: https://arxiv.org/abs/1810.03989
Attention Driven Person Re-identification
intro: Pattern Recognition (PR)
arxiv: https://arxiv.org/abs/1810.05866
A Coarse-to-fine Pyramidal Model for Person Re-identification via Multi-Loss Dynamic Training
intro: YouTu Lab, Tencent
arxiv: https://arxiv.org/abs/1810.12193
M2M-GAN: Many-to-Many Generative Adversarial Transfer Learning for Person Re-Identification
arxiv: https://arxiv.org/abs/1811.03768
Batch Feature Erasing for Person Re-identification and Beyond
arxiv: https://arxiv.org/abs/1811.07130
github(official, Pytorch): https://github.com/daizuozhuo/batch-feature-erasing-network
Re-Identification with Consistent Attentive Siamese Networks
arxiv: https://arxiv.org/abs/1811.07487
One Shot Domain Adaptation for Person Re-Identification
arxiv: https://arxiv.org/abs/1811.10144
Parameter-Free Spatial Attention Network for Person Re-Identification
arxiv: https://arxiv.org/abs/1811.12150
Spectral Feature Transformation for Person Re-identification
intro: University of Chinese Academy of Sciences & TuSimple
arxiv: https://arxiv.org/abs/1811.11405
Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-identification
arxiv: https://arxiv.org/abs/1811.11510
Dissecting Person Re-identification from the Viewpoint of Viewpoint
arxiv: https://arxiv.org/abs/1812.02162
Fast and Accurate Person Re-Identification with RMNet
intro: IOTG Computer Vision (ICV), Intel
arxiv: https://arxiv.org/abs/1812.02465
Spatial-Temporal Person Re-identification
intro: AAAI 2019
intro: Sun Yat-sen University
arxiv: https://arxiv.org/abs/1812.03282
github: https://github.com/Wanggcong/Spatial-Temporal-Re-identification
Omni-directional Feature Learning for Person Re-identification
intro: Tongji University
keywords: OIM loss
arxiv: https://arxiv.org/abs/1812.05319
Learning Incremental Triplet Margin for Person Re-identification
intro: AAAI 2019 spotlight
intro: Hikvision Research Institute
arxiv: https://arxiv.org/abs/1812.06576
Densely Semantically Aligned Person Re-Identification
intro: USTC & MSRA
arxiv: https://arxiv.org/abs/1812.08967
EANet: Enhancing Alignment for Cross-Domain Person Re-identification
intro: CRISE & CASIA & Horizon Robotics
arxiv: https://arxiv.org/abs/1812.11369
github(official, Pytorch): https://github.com/huanghoujing/EANet
blog: https://zhuanlan.zhihu.com/p/53660395
Backbone Can Not be Trained at Once: Rolling Back to Pre-trained Network for Person Re-Identification
intro: AAAI 2019
intro: Seoul National University & Samsung SDS
arxiv: https://arxiv.org/abs/1901.06140
Ensemble Feature for Person Re-Identification
keywords: EnsembleNet
arxiv: https://arxiv.org/abs/1901.05798
Adversarial Metric Attack for Person Re-identification
intro: University of Oxford & Johns Hopkins University
arxiv: https://arxiv.org/abs/1901.10650
Discovering Underlying Person Structure Pattern with Relative Local Distance for Person Re-identification
intro: SYSU
arxiv: https://arxiv.org/abs/1901.10100
github: https://github.com/Wanggcong/RLD_codes
Attributes-aided Part Detection and Refinement for Person Re-identification
arxiv: https://arxiv.org/abs/1902.10528
Bags of Tricks and A Strong Baseline for Deep Person Re-identification
arxiv: https://arxiv.org/abs/1903.07071
github: https://github.com/michuanhaohao/reid-strong-baseline
Auto-ReID: Searching for a Part-aware ConvNet for Person Re-Identification
keywords: NAS
arxiv: https://arxiv.org/abs/1903.09776
Perceive Where to Focus: Learning Visibility-aware Part-level Features for Partial Person Re-identification
intro: CVPR 2019
intro: Tsinghua University & Megvii Technology
keywords: Visibility-aware Part Model (VPM)
arxiv: https://arxiv.org/abs/1904.00537
Pedestrian re-identification based on Tree branch network with local and global learning
intro: ICME 2019 oral
arxiv: https://arxiv.org/abs/1904.00355
Invariance Matters: Exemplar Memory for Domain Adaptive Person Re-identification
intro: CVPR 2019
arxiv: https://arxiv.org/abs/1904.01990
github: https://github.com/zhunzhong07/ECN
Person Re-identification with Bias-controlled Adversarial Training
arxiv: https://arxiv.org/abs/1904.00244
Person Re-identification with Metric Learning using Privileged Information
intro: IEEE TIP
arxiv: https://arxiv.org/abs/1904.05005
Joint Discriminative and Generative Learning for Person Re-identification
intro: CVPR 2019 oral
intro: NVIDIA & University of Technology Sydney & Australian National University
arxiv: https://arxiv.org/abs/1904.07223
Joint Detection and Identification Feature Learning for Person Search
intro: CVPR 2017 Spotlight
keywords: Online Instance Matching (OIM) loss function
homepage(dataset+code): http://www.ee.cuhk.edu.hk/~xgwang/PS/dataset.html
arxiv: https://arxiv.org/abs/1604.01850
paper: http://www.ee.cuhk.edu.hk/~xgwang/PS/paper.pdf
github(official. Caffe): https://github.com/ShuangLI59/person_search
Person Re-identification in the Wild
intro: CVPR 2017 spotlight
keywords: PRW dataset
project page: http://www.liangzheng.com.cn/Project/project_prw.html
arxiv: https://arxiv.org/abs/1604.02531
github: https://github.com/liangzheng06/PRW-baseline
youtube: https://www.youtube.com/watch?v=dbOGwBITJqo
IAN: The Individual Aggregation Network for Person Search
arxiv: https://arxiv.org/abs/1705.05552
Neural Person Search Machines
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1707.06777
End-to-End Detection and Re-identification Integrated Net for Person Search
keywords: I-Net
arxiv: https://arxiv.org/abs/1804.00376
Person Search via A Mask-guided Two-stream CNN Model
intro: ECCV 2018
arxiv: https://arxiv.org/abs/1807.08107
Person Search by Multi-Scale Matching
intro: ECCV 2018
keywords: Cross-Level Semantic Alignment (CLSA)
arxiv: https://arxiv.org/abs/1807.08582
Learning Context Graph for Person Search
intro: CVPR 2019
intro: Shanghai Jiao Tong University & Tencent YouTu Lab & Inception Institute of Artificial Intelligence, UAE
arxiv: https://arxiv.org/abs/1904.01830
Pose Invariant Embedding for Deep Person Re-identification
keywords: pose invariant embedding (PIE), PoseBox fusion (PBF) CNN
arxiv: https://arxiv.org/abs/1701.07732
Deeply-Learned Part-Aligned Representations for Person Re-Identification
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1707.07256
github(official, Caffe): https://github.com/zlmzju/part_reid
Spindle Net: Person Re-identification with Human Body Region Guided Feature Decomposition and Fusion
intro: CVPR 2017
paper: http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Spindle_Net_Person_CVPR_2017_paper.pdf
github: https://github.com/yokattame/SpindleNet
Pose-driven Deep Convolutional Model for Person Re-identification
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1709.08325
A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1711.10378
github(official): https://github.com/pse-ecn/pose-sensitive-embedding
Pose-Driven Deep Models for Person Re-Identification
intro: Masters thesis
arxiv: https://arxiv.org/abs/1803.08709
Pose Transferrable Person Re-Identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Liu_Pose_Transferrable_Person_CVPR_2018_paper.pdf
Person re-identification with fusion of hand-crafted and deep pose-based body region features
arxiv: https://arxiv.org/abs/1803.10630
Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1701.07717
github(official, Matlab): https://github.com/layumi/Person-reID_GAN
github: https://github.com/qiaoguan/Person-reid-GAN-pytorch
Person Transfer GAN to Bridge Domain Gap for Person Re-Identification
intro: CVPR 2018 spotlight
intro: PTGAN
arxiv: https://arxiv.org/abs/1711.08565
github: https://github.com/JoinWei-PKU/PTGAN
Pose-Normalized Image Generation for Person Re-identification
keywords: PN-GAN
arxiv: https://arxiv.org/abs/1712.02225
github: https://github.com/naiq/PN_GAN
Multi-pseudo Regularized Label for Generated Samples in Person Re-Identification
arxiv: https://arxiv.org/abs/1801.06742
Human Semantic Parsing for Person Re-identification
intro: CVPR 2018. SPReID
arxiv: https://arxiv.org/abs/1804.00216
Improved Person Re-Identification Based on Saliency and Semantic Parsing with Deep Neural Network Models
keywords: Saliency-Semantic Parsing Re-Identification (SSP-ReID)
arxiv: https://arxiv.org/abs/1807.05618
Partial Person Re-identification
intro: ICCV 2015
paper: https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zheng_Partial_Person_Re-Identification_ICCV_2015_paper.pdf
Deep Spatial Feature Reconstruction for Partial Person Re-identification: Alignment-Free Approach
intro: CVPR 2018.
keywords: Market1501 rank1=83.58%
arxiv: https://arxiv.org/abs/1801.00881
Occluded Person Re-identification
intro: ICME 2018
arxiv: https://arxiv.org/abs/1804.02792
Partial Person Re-identification with Alignment and Hallucination
intro: Imperial College London
keywords: Partial Matching Net (PMN)
arxiv: https://arxiv.org/abs/1807.09162
SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification
intro: ACCV 2018
arxiv: https://arxiv.org/abs/1810.06996
STNReID: Deep Convolutional Networks with Pairwise Spatial Transformer Networks for Partial Person Re-identification
intro: Zhejiang University & Megvii Inc
arxiv: https://arxiv.org/abs/1903.07072
Foreground-aware Pyramid Reconstruction for Alignment-free Occluded Person Re-identification
arxiv: https://arxiv.org/abs/1904.04975
RGB-Infrared Cross-Modality Person Re-Identification
paper: Wu_RGB-Infrared_Cross-Modality_Person_ICCV_2017_paper.pdf
Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-Identification
intro: ECCV 2018
paper: Nikolaos_Karianakis_Reinforced_Temporal_Attention_ECCV_2018_paper.pdf
A Cross-Modal Distillation Network for Person Re-identification in RGB-Depth
arxiv: https://arxiv.org/abs/1810.11641
Multi-scale Learning for Low-resolution Person Re-identification
intro: ICCV 2015
paper: https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Li_Multi-Scale_Learning_for_ICCV_2015_paper.pdf
Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification
intro: IJCAI 2018
paper: https://www.ijcai.org/proceedings/2018/0541.pdf
Deep Low-Resolution Person Re-Identification
intro: AAAI 2018
keywords: Super resolution and Identity joiNt learninG (SING)
paper: http://www.eecs.qmul.ac.uk/~xiatian/papers/JiaoEtAl_2018AAAI.pdf
Deep Reinforcement Learning Attention Selection for Person Re-Identification
Identity Alignment by Noisy Pixel Removal
intro: BMVC 2017
arxiv: https://arxiv.org/abs/1707.02785
paper: http://www.eecs.qmul.ac.uk/~sgg/papers/LanEtAl_2017BMVC.pdf
Multi-Task Learning with Low Rank Attribute Embedding for Person Re-identification
intro: ICCV 2015
paper: http://legacydirs.umiacs.umd.edu/~fyang/papers/iccv15.pdf
Deep Attributes Driven Multi-Camera Person Re-identification
intro: ECCV 2016
arxiv: https://arxiv.org/abs/1605.03259
Improving Person Re-identification by Attribute and Identity Learning
arxiv: https://arxiv.org/abs/1703.07220
Person Re-identification by Deep Learning Attribute-Complementary Information
intro: CVPR 2017 workshop
paper: https://sci-hub.tw/10.1109/CVPRW.2017.186
CA3Net: Contextual-Attentional Attribute-Appearance Network for Person Re-Identification
arxiv: https://arxiv.org/abs/1811.07544
Recurrent Convolutional Network for Video-based Person Re-Identification
intro: CVPR 2016
paper: McLaughlin_Recurrent_Convolutional_Network_CVPR_2016_paper.pdf
github: https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
Deep Recurrent Convolutional Networks for Video-based Person Re-identification: An End-to-End Approach
arxiv: https://arxiv.org/abs/1606.01609
Jointly Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1708.02286
Three-Stream Convolutional Networks for Video-based Person Re-Identification
arxiv: https://arxiv.org/abs/1712.01652
LVreID: Person Re-Identification with Long Sequence Videos
arxiv: https://arxiv.org/abs/1712.07286
Multi-shot Pedestrian Re-identification via Sequential Decision Making
intro: CVPR 2018. TuSimple
keywords: reinforcement learning
arxiv: https://arxiv.org/abs/1712.07257
github: https://github.com/TuSimple/rl-multishot-reid
Diversity Regularized Spatiotemporal Attention for Video-based Person Re-identification
intro: CUHK-SenseTime & Argo AI
arxiv: https://arxiv.org/abs/1803.09882
Video Person Re-identification with Competitive Snippet-similarity Aggregation and Co-attentive Snippet Embedding
intro: CVPR 2018 Poster
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Video_Person_Re-Identification_CVPR_2018_paper.pdf
Exploit the Unknown Gradually: One-Shot Video-Based Person Re-Identification by Stepwise Learning
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Exploit_the_Unknown_CVPR_2018_paper.pdf
Revisiting Temporal Modeling for Video-based Person ReID
arxiv: https://arxiv.org/abs/1805.02104
github: https://github.com/jiyanggao/Video-Person-ReID
Video Person Re-identification by Temporal Residual Learning
arxiv: https://arxiv.org/abs/1802.07918
A Spatial and Temporal Features Mixture Model with Body Parts for Video-based Person Re-Identification
arxiv: https://arxiv.org/abs/1807.00975
Video-based Person Re-identification via 3D Convolutional Networks and Non-local Attention
intro: University of Science and Technology of China & University of Chinese Academy of Sciences
arxiv: https://arxiv.org/abs/1807.05073
Spatial-Temporal Synergic Residual Learning for Video Person Re-Identification
arxiv: https://arxiv.org/abs/1807.05799
Where-and-When to Look: Deep Siamese Attention Networks for Video-based Person Re-identification
intro: IEEE Transactions on Multimedia
arxiv: https://arxiv.org/abs/1808.01911
STA: Spatial-Temporal Attention for Large-Scale Video-based Person Re-Identification
intro: AAAI 2019
arxiv: https://arxiv.org/abs/1811.04129
Multi-scale 3D Convolution Network for Video Based Person Re-Identification
intro: AAAI 2019
arxiv: https://arxiv.org/abs/1811.07468
Deep Active Learning for Video-based Person Re-identification
arxiv: https://arxiv.org/abs/1812.05785
Spatial and Temporal Mutual Promotion for Video-based Person Re-identification
intro: AAAI 2019
arxiv: https://arxiv.org/abs/1812.10305
3D PersonVLAD: Learning Deep Global Representations for Video-based Person Re-identification
arxiv: https://arxiv.org/abs/1812.10222
SCAN: Self-and-Collaborative Attention Network for Video Person Re-identification
intro: TIP 2019
arxiv: https://arxiv.org/abs/1807.05688
GAN-based Pose-aware Regulation for Video-based Person Re-identification
intro: Heriot-Watt University & University of Edinburgh & Queen’s University Belfast & Anyvision
keywords: Weighted Fusion (WF) & Weighted-Pose Regulation (WPR)
arxiv: https://arxiv.org/abs/1903.11552
Convolutional Temporal Attention Model for Video-based Person Re-identification
intro: ICME 2019
arxiv: https://arxiv.org/abs/1904.04492
Divide and Fuse: A Re-ranking Approach for Person Re-identification
intro: BMVC 2017
arxiv: https://arxiv.org/abs/1708.04169
Re-ranking Person Re-identification with k-reciprocal Encoding
intro: CVPR 2017
arxiv: https://arxiv.org/abs/1701.08398
github: https://github.com/zhunzhong07/person-re-ranking
A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1711.10378
github(official): https://github.com/pse-ecn/expanded-cross-neighborhood
Adaptive Re-ranking of Deep Feature for Person Re-identification
arxiv: https://arxiv.org/abs/1811.08561
Unsupervised Person Re-identification: Clustering and Fine-tuning
arxiv: https://arxiv.org/abs/1705.10444
github: https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning
Stepwise Metric Promotion for Unsupervised Video Person Re-identification
intro: ICCV 2017
paper: http://openaccess.thecvf.com/content_ICCV_2017/papers/Liu_Stepwise_Metric_Promotion_ICCV_2017_paper.pdf
github: https://github.com/lilithliu/StepwiseMetricPromotion-code
Dynamic Label Graph Matching for Unsupervised Video Re-Identification
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1709.09297
github: https://github.com/mangye16/dgm_re-id
Unsupervised Cross-dataset Person Re-identification by Transfer Learning of Spatio-temporal Patterns
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1803.07293
github: https://github.com/ahangchen/TFusion
blog: https://zhuanlan.zhihu.com/p/34778414
Cross-dataset Person Re-Identification Using Similarity Preserved Generative Adversarial Networks
arxiv: https://arxiv.org/abs/1806.04533
Transferable Joint Attribute-Identity Deep Learning for Unsupervised Person Re-Identification
intro: CVPR 2018
arxiv: https://arxiv.org/abs/1803.09786
Adaptation and Re-Identification Network: An Unsupervised Deep Transfer Learning Approach to Person Re-Identification
intro: CVPR 2018 workshop. National Taiwan University & Umbo Computer Vision
keywords: adaptation and re-identification network (ARN)
arxiv: https://arxiv.org/abs/1804.09347
Domain Adaptation through Synthesis for Unsupervised Person Re-identification
arxiv: https://arxiv.org/abs/1804.10094
Deep Association Learning for Unsupervised Video Person Re-identification
intro: BMVC 2018
arxiv: https://arxiv.org/abs/1808.07301
Support Neighbor Loss for Person Re-Identification
intro: ACM Multimedia (ACM MM) 2018
arxiv: https://arxiv.org/abs/1808.06030
Unsupervised Person Re-identification by Deep Learning Tracklet Association
intro: ECCV 2018 Oral
arxiv: https://arxiv.org/abs/1809.02874
Unsupervised Tracklet Person Re-Identification
intro: TPAMI 2019
arxiv: https://arxiv.org/abs/1903.00535
github: https://github.com/liminxian/DukeMTMC-SI-Tracklet
Unsupervised Person Re-identification by Deep Asymmetric Metric Embedding
intro: TPAMI
keywords: DEep Clustering-based Asymmetric MEtric Learning (DECAMEL)
arxiv: https://arxiv.org/abs/1901.10177
github: https://github.com/KovenYu/DECAMEL
Unsupervised Person Re-identification by Soft Multilabel Learning
intro: CVPR 2019 oral
intro: Sun Yat-sen University & YouTu Lab & Queen Mary University of London
keywords: MAR (MultilAbel Reference learning), soft multilabel-guided hard negative mining
project page: https://kovenyu.com/publication/2019-cvpr-mar/
arxiv: https://arxiv.org/abs/1903.06325
github(official, Pytorch): https://github.com/KovenYu/MAR
A Novel Unsupervised Camera-aware Domain Adaptation Framework for Person Re-identification
arxiv: https://arxiv.org/abs/1904.03425
Weakly Supervised Person Re-Identification
intro: CVPR 2019
keywords: multi-instance multi-label learning (MIML), Cross-View MIML (CV-MIML)
arxiv: https://arxiv.org/abs/1904.03832
Weakly Supervised Person Re-identification: Cost-effective Learning with A New Benchmark
keywords: SYSU-30k
arxiv: https://arxiv.org/abs/1904.03845
Learning Deep Neural Networks for Vehicle Re-ID with Visual-spatio-temporal Path Proposals
intro: ICCV 2017
arxiv: https://arxiv.org/abs/1708.03918
Viewpoint-Aware Attentive Multi-View Inference for Vehicle Re-Identification
intro: CVPR 2018
paper: http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Viewpoint-Aware_Attentive_Multi-View_CVPR_2018_paper.pdf
RAM: A Region-Aware Deep Model for Vehicle Re-Identification
intro: ICME 2018
arxiv: https://arxiv.org/abs/1806.09283
Vehicle Re-Identification in Context
intro: Pattern Recognition - 40th German Conference, (GCPR) 2018, Stuttgart
project page: https://qmul-vric.github.io/
arxiv: https://arxiv.org/abs/1809.09409
Vehicle Re-identification Using Quadruple Directional Deep Learning Features
arxiv: https://arxiv.org/abs/1811.05163
Coarse-to-fine: A RNN-based hierarchical attention model for vehicle re-identification
intro: ACCV 2018
arxiv: https://arxiv.org/abs/1812.04239
Vehicle Re-Identification: an Efficient Baseline Using Triplet Embedding
arxiv: https://arxiv.org/abs/1901.01015
A Two-Stream Siamese Neural Network for Vehicle Re-Identification by Using Non-Overlapping Cameras
intro: ICIP 2019
arxiv: https://arxiv.org/abs/1902.01496
CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification
intro: Accepted for oral presentation at CVPR 2019 with review ratings of 2 strong accepts and 1 accept (work done during an internship at NVIDIA)
arxiv: https://arxiv.org/abs/1903.09254
Vehicle Re-identification in Aerial Imagery: Dataset and Approach
intro: Northwestern Polytechnical University
arxiv: https://arxiv.org/abs/1904.01400
Deep Metric Learning for Person Re-Identification
intro: ICPR 2014
paper: http://www.cbsr.ia.ac.cn/users/zlei/papers/ICPR2014/Yi-ICPR-14.pdf
Deep Metric Learning for Practical Person Re-Identification
arxiv: https://arxiv.org/abs/1407.4979
Constrained Deep Metric Learning for Person Re-identification
arxiv: https://arxiv.org/abs/1511.07545
Embedding Deep Metric for Person Re-identification: A Study Against Large Variations
intro: ECCV 2016
arxiv: https://arxiv.org/abs/1611.00137
DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer
intro: TuSimple
keywords: pedestrian re-identification
arxiv: https://arxiv.org/abs/1707.01220
Open-ReID: Open source person re-identification library in python
intro: Open-ReID is a lightweight library of person re-identification for research purpose. It aims to provide a uniform interface for different datasets, a full set of models and evaluation metrics, as well as examples to reproduce (near) state-of-the-art results.
project page: https://cysu.github.io/open-reid/
github(PyTorch): https://github.com/Cysu/open-reid
examples: https://cysu.github.io/open-reid/examples/training_id.html
benchmarks: https://cysu.github.io/open-reid/examples/benchmarks.html
caffe-PersonReID
intro: Person Re-Identification: Multi-Task Deep CNN with Triplet Loss
github: https://github.com/agjayant/caffe-Person-ReID
Person_reID_baseline_pytorch
intro: Pytorch implement of Person re-identification baseline
github: https://github.com/layumi/Person_reID_baseline_pytorch
deep-person-reid
intro: Pytorch implementation of deep person re-identification models.
github: https://github.com/KaiyangZhou/deep-person-reid
ReID_baseline
intro: Baseline model (with bottleneck) for person ReID (using softmax and triplet loss).
github: https://github.com/L1aoXingyu/reid_baseline
blog: https://zhuanlan.zhihu.com/p/40514536
gluon-reid
intro: A code gallery for person re-identification with mxnet-gluon, reproducing many state-of-the-art algorithms.
github: https://github.com/xiaolai-sqlai/gluon-reid
DukeMTMC-reID
intro: The Person re-ID Evaluation Code for DukeMTMC-reID Dataset (Including Dataset Download)
github: https://github.com/layumi/DukeMTMC-reID_evaluation
DukeMTMC-reID_baseline (Matlab)
github: https://github.com/layumi/DukeMTMC-reID_baseline
Code for IDE baseline on Market-1501
github: https://github.com/zhunzhong07/IDE-baseline-Market-1501
Attribute-related datasets
RAP: http://rap.idealtest.org/
Attribute for Market-1501 and DukeMTMC_reID: https://vana77.github.io/
Video-related datasets
Mars: http://liangzheng.org/Project/project_mars.html
PRID2011: https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/
NLP-related datasets
Natural-language search for images: http://xiaotong.me/static/projects/person-search-language/dataset.html
Natural-language search for videos containing a person: http://www.mi.t.u-tokyo.ac.jp/projects/person_search
1st Workshop on Target Re-Identification and Multi-Target Multi-Camera Tracking
https://reid-mct.github.io/
Target Re-Identification and Multi-Target Multi-Camera Tracking
http://openaccess.thecvf.com/CVPR2017_workshops/CVPR2017_W17.py
Person Re-Identification: Theory and Best Practice
http://www.micc.unifi.it/reid-tutorial/
Listed in No Particular Order
Re-id Resources
https://wangzwhu.github.io/home/re_id_resources.html
Zhuanzhi
http://www.zhuanzhi.ai/topic/2001183057160970
Zhihu
Person re-identification (行人重识别): https://zhuanlan.zhihu.com/personReid
Person Re-id: https://zhuanlan.zhihu.com/re-id
Topic: https://www.zhihu.com/topic/20087378/hot
Blogs
A brief introduction to person re-identification: https://www.jianshu.com/p/98cc04cca0ae
Deep-learning-based Person Re-ID (a survey): https://blog.csdn.net/linolzhang/article/details/71075756
Person re-identification (including a comparison with pedestrian detection): https://blog.csdn.net/liuqinglong110/article/details/41699861
A survey of person re-identification (Person Re-identification: Past, Present and Future): https://blog.csdn.net/auto1993/article/details/74091803
Person re-identification: http://cweihang.cn/ml/reid/
The Market-1501 dataset was collected on the Tsinghua University campus in summer, and was constructed and released in 2015. It contains 1,501 identities and 32,668 detected pedestrian bounding boxes captured by 6 cameras (5 high-resolution and 1 low-resolution). Each identity is captured by at least 2 cameras and may have multiple images under a single camera. The training set has 751 identities with 12,936 images (17.2 images per identity on average); the test set has 750 identities with 19,732 images (26.3 images per identity on average). The bounding boxes of the 3,368 query images were drawn by hand, while those in the gallery were produced by a DPM detector. The fixed training and test splits provided by the dataset can be used under both single-shot and multi-shot settings.
Market-1501
├── bounding_box_test
│   ├── 0000_c1s1_000151_01.jpg
│   ├── 0000_c1s1_000376_03.jpg
│   └── 0000_c1s1_001051_02.jpg
├── bounding_box_train
│   ├── 0002_c1s1_000451_03.jpg
│   ├── 0002_c1s1_000551_01.jpg
│   └── 0002_c1s1_000801_01.jpg
├── gt_bbox
│   ├── 0001_c1s1_001051_00.jpg
│   ├── 0001_c1s1_009376_00.jpg
│   └── 0001_c2s1_001976_00.jpg
├── gt_query
│   ├── 0001_c1s1_001051_00_good.mat
│   └── 0001_c1s1_001051_00_junk.mat
├── query
│   ├── 0001_c1s1_001051_00.jpg
│   ├── 0001_c2s1_000301_00.jpg
│   └── 0001_c3s1_000551_00.jpg
└── readme.txt
1) "bounding_box_test": the 750 test identities, 19,732 images; the prefix 0000 marks DPM false detections made while extracting these 750 identities (they may belong to the same person as a query), and -1 marks detections of other people (not among the 750 identities)
2) "bounding_box_train": the 751 training identities, 12,936 images
3) "query": for each of the 750 test identities, one image per camera is randomly selected as a query, so each person has at most 6 queries, 3,368 images in total
4) "gt_query": Matlab format; marks, for each query, which images are good matches (same person, different camera) and which are junk matches (same person and same camera, or a different person)
5) "gt_bbox": hand-labeled bounding boxes, used to judge whether a DPM-detected bounding box is a good box
Take 0001_c1s1_000151_01.jpg as an example:
1) 0001 is the identity label, running from 0001 to 1501;
2) c1 means the first camera (camera 1); there are 6 cameras in total;
3) s1 means the first sequence (sequence 1); each camera has several recorded sequences;
4) 000151 means frame 000151 of c1s1; the video frame rate is 25 fps;
5) 01 is the 1st detected bounding box in frame c1s1_000151; since DPM is used, several bboxes may be detected per frame. 00 denotes a hand-drawn box.
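As a small illustration of the naming scheme above, the fields can be split out with a regular expression. This is a hypothetical helper, not part of any official toolkit:

```python
import re

# Parses Market-1501 file names such as 0001_c1s1_000151_01.jpg.
# Negative IDs (-1) and the 0000 prefix are kept as plain integers.
PATTERN = re.compile(r"^(-?\d+)_c(\d)s(\d)_(\d{6})_(\d{2})\.jpg$")

def parse_market1501(name):
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    pid, cam, seq, frame, bbox = m.groups()
    return {"pid": int(pid), "camera": int(cam), "sequence": int(seq),
            "frame": int(frame), "bbox": int(bbox)}  # bbox 00 = hand-drawn
```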
Cumulative Matching Characteristics (CMC) curves are currently the most popular evaluation method in person re-identification. Consider the simple single-gallery-shot setting, where each gallery identity has only one instance. For each query, the algorithm ranks all gallery samples by their distance to the query image, from smallest to largest, and the CMC top-k accuracy is computed as:
Acc_k = 1, if the top-k ranked gallery samples contain the query identity
Acc_k = 0, otherwise
This is a shifted step function; the final CMC curve is obtained by averaging the shifted step functions over all queries. Although CMC is well-defined in the single-gallery-shot setting, its definition is ambiguous in the multi-gallery-shot setting, because each gallery identity may have multiple instances.
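The averaging of shifted step functions described above can be sketched as follows. This is an illustrative toy implementation, not the official evaluation code; `ranked_ids` (per-query gallery labels sorted by ascending distance) and `query_ids` are hypothetical inputs:

```python
# Single-gallery-shot CMC: for each query, find the first rank at which the
# correct identity appears, then accumulate a step function from that rank on.
def cmc_curve(ranked_ids, query_ids, max_rank=10):
    curve = [0.0] * max_rank
    for ranking, q in zip(ranked_ids, query_ids):
        for k, gid in enumerate(ranking[:max_rank]):
            if gid == q:
                for r in range(k, max_rank):
                    curve[r] += 1  # shifted step function: 1 from rank k on
                break
    n = len(query_ids)
    return [c / n for c in curve]  # average over all queries
```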
In Market-1501, the query and gallery sets may come from the same camera views, but for each query identity, his/her gallery samples from the same camera are excluded. Each gallery identity is not down-sampled to a single instance. This means that when computing CMC, a query will always match the "easiest" positive sample in the gallery while ignoring the harder positives. The bounding_box_test folder holds the gallery samples, bounding_box_train the training samples, and query the query samples.
As shown above, CMC evaluation is flawed in the multi-gallery-shot setting. Therefore mAP (mean average precision) is also used as an evaluation metric. mAP can be viewed as the area under the PR curve, i.e. the averaged precision.
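For comparison, here is a minimal sketch of average precision over a ranked list of binary relevance labels, and mAP as its mean over queries (illustrative only, not the official evaluation code):

```python
# Average precision for one query: the mean of precision@i taken at every
# rank i where a relevant (positive) gallery sample appears.
def average_precision(ranked_relevance):
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0

def mean_ap(all_rankings):
    return sum(average_precision(r) for r in all_rankings) / len(all_rankings)
```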
If you use this dataset, please kindly cite this paper:

@inproceedings{zheng2015scalable,
title={Scalable Person Re-identification: A Benchmark},
author={Zheng, Liang and Shen, Liyue and Tian, Lu and Wang, Shengjin and Wang, Jingdong and Tian, Qi},
booktitle={Computer Vision, IEEE International Conference on},
year={2015}
}
DukeMTMC is a large-scale, labeled multi-target multi-camera pedestrian tracking dataset. It provides a new large high-resolution video dataset recorded by 8 synchronized cameras, with more than 7,000 single-camera trajectories and over 2,700 distinct identities. DukeMTMC-reID is the person re-identification subset of DukeMTMC and provides hand-labeled bounding boxes.
DukeMTMC-reID
├── bounding_box_test
│   ├── 0002_c1_f0044158.jpg
│   ├── 3761_c6_f0183709.jpg
│   └── 7139_c2_f0160815.jpg
├── bounding_box_train
│   ├── 0001_c2_f0046182.jpg
│   ├── 0008_c3_f0026318.jpg
│   └── 7140_c4_f0175988.jpg
├── query
│   ├── 0005_c2_f0046985.jpg
│   ├── 0023_c4_f0031504.jpg
│   └── 7139_c2_f0160575.jpg
├── CITATION_DukeMTMC.txt
├── CITATION_DukeMTMC-reID.txt
├── LICENSE_DukeMTMC.txt
├── LICENSE_DukeMTMC-reID.txt
└── README.md
One image is sampled every 120 frames of video, yielding 36,411 images. 1,404 identities appear in more than two cameras, and 408 identities (distractor IDs) appear in only one camera.
1) "bounding_box_test": the 702 test identities, 17,661 images (randomly sampled; 702 IDs + 408 distractor IDs)
2) "bounding_box_train": the 702 training identities, 16,522 images (randomly sampled)
3) "query": for each of the 702 test identities, one image per camera is randomly selected as a query, 2,228 images in total
Take 0001_c2_f0046182.jpg as an example:
1) 0001 is the identity label;
2) c2 means the image comes from camera 2, out of 8 cameras in total;
3) f0046182 means frame 46182 of camera 2.
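The DukeMTMC-reID naming scheme can be parsed the same way as the Market-1501 one; this is a hypothetical helper, not official code:

```python
import re

# Parses DukeMTMC-reID file names such as 0001_c2_f0046182.jpg into
# (identity, camera, frame) integers.
DUKE = re.compile(r"^(\d+)_c(\d)_f(\d+)\.jpg$")

def parse_duke(name):
    m = DUKE.match(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    pid, cam, frame = m.groups()
    return int(pid), int(cam), int(frame)
```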
Figure. The image distribution of the DukeMTMC-reID training set. Note that the median number of images per ID is 20, but some IDs contain many images, which may disadvantage some algorithms. (For example, ID 5388 contains 426 images.)
Thanks to Xun for the suggestions.
This picture is from DukeMTMC Homepage.
(Matlab) To evaluate, you need to compute your gallery and query features (i.e., a 17661 x 2048 and a 2228 x 2048 matrix) and save them in advance. Then download the code in this repository, change the image path and the feature path in evaluation_res_duke_fast.m, and run it to evaluate.
(Python) We also provide evaluation code in Python; you may refer to here.
We release our baseline training code and pretrained model in [Matconvnet Version] and [Pytorch Version]. You can choose one of the two tools to conduct the experiment. Furthermore, you may try our new Pedestrian Alignment Code, which combines person alignment with re-ID.
Or you can directly download the fine-tuned ResNet-50 baseline features from Google Drive or BaiduYun, which include the features of the training, query and gallery sets. The DukeMTMC-reID LICENSE is also included.
If you use this dataset, please kindly cite the following two papers:

@inproceedings{zheng2017unlabeled,
title={Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro},
author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},
booktitle={Proceedings of the IEEE International Conference on Computer Vision},
year={2017}
}
@inproceedings{ristani2016MTMC,
title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
year = {2016}
}
CUHK-03
├── "detected" ── 5 x 1 cell
│   ├── 843 x 10 cell
│   ├── 440 x 10 cell
│   ├── 77 x 10 cell
│   ├── 58 x 10 cell
│   └── 49 x 10 cell
├── "labeled" ── 5 x 1 cell
│   ├── 843 x 10 cell
│   ├── 440 x 10 cell
│   ├── 77 x 10 cell
│   ├── 58 x 10 cell
│   └── 49 x 10 cell
└── "testsets" ── 20 x 1 cell
    └── 100 x 2 double matrix
(1) "detected": 5 x 1 cells, labeled by a detector; each cell contains the images collected by one camera pair, as follows:
Each camera pair is an M x 10 cell array, where M is the pedestrian index; the first 5 and last 5 columns come from the two different cameras of the same pair.
Each element of the cell is an H x W x 3 pedestrian bounding-box image (uint8); a few entries may be missing and are empty arrays.
(2) "labeled": 5 x 1 cells with hand-labeled bounding boxes; same format and content as "detected".
(3) "testsets": 20 x 1 cells, the test protocol, consisting of 20 matrices of size 100 x 2 (double), one per repetition.
In each 100 x 2 double matrix, the 100 rows are the 100 test samples; column 1 is the camera-pair index and column 2 is the pedestrian index.
CUHK-03 has two test protocols.
The first is the old protocol (reference [1], the paper that introduced the dataset); see the "testsets" protocol in the dataset. Specifically, 100 identities are randomly selected for testing, 1,160 for training and 100 for validation (1,360 identities in total rather than 1,467, because camera pairs 4 and 5 were not used in the experiments), and this is repeated 20 times. This protocol is a single-shot setting.
The second protocol (reference [2]) is similar to Market-1501: the dataset is split into a training set of 767 identities and a test set of 700 identities. At test time, one image per identity is randomly selected as the query and the rest form the gallery, so each identity has multiple ground-truth matches in the gallery. (See here for the new protocol.)
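The random query/gallery split of the second protocol can be sketched like this. It is a hypothetical helper, where `images_by_id` maps each test identity to its list of images:

```python
import random

# For each test identity, pick one image at random as the query and keep
# the remaining images of that identity in the gallery.
def split_query_gallery(images_by_id, seed=0):
    rng = random.Random(seed)
    query, gallery = [], []
    for pid, imgs in images_by_id.items():
        imgs = list(imgs)
        q = imgs.pop(rng.randrange(len(imgs)))
        query.append((pid, q))
        gallery.extend((pid, g) for g in imgs)
    return query, gallery
```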
If you use this dataset, please kindly cite the following paper:

@inproceedings{li2014deepreid,
title={DeepReID: Deep Filter Pairing Neural Network for Person Re-identification},
author={Li, Wei and Zhao, Rui and Xiao, Tong and Wang, Xiaogang},
booktitle={CVPR},
year={2014}
}
Conventional deep-learning intuition says that a deeper network should generally outperform a shallower one: to push accuracy further, the most direct approach is to make the network as deep as possible. In image tasks, for example, a CNN extracts low-/mid-/high-level features; the more layers, the richer the features at different levels, and the deeper the network, the more abstract and semantically meaningful the extracted features become.
In the paper, Kaiming He and colleagues ran the following experiment: they trained a 20-layer and a 56-layer plain network (a traditional CNN built from convolution, pooling and fully connected layers) on CIFAR-10, and found that both the training error and the test error of the 56-layer network were larger than those of the 20-layer one. In other words, as depth increases the model gets worse, and accuracy drops even on the training set. This clearly cannot be overfitting, since overfitting would show better performance on the training set.
1. Why can't we simply stack more layers?
As a network gets deeper, a well-known problem is vanishing/exploding gradients, which prevents the deep layers from receiving effective correction signals or makes training hard to converge. Normalized initialization and intermediate normalization layers (Batch Normalization) largely alleviate this, but they do not solve the problem raised here.
2. Why does performance drop as the network gets deeper?
Suppose we have a shallow plain network A with n layers that already produces fairly good results, and we append m more layers to obtain a new network B; we then find that B's accuracy is actually lower. This is counter-intuitive: if the appended m layers simply computed an identity mapping of the first n layers' output, B should at least match A. But the experiments show that current solvers cannot reach this solution, which means B has trouble learning the identity mapping. In other words, plain networks find it hard to learn identity mappings; this is the so-called degradation phenomenon.
If the later layers of a deep network are identity mappings, the model degenerates into a shallow network, so the problem becomes how to learn an identity mapping. Directly asking a few layers to fit the underlying identity mapping H(x) = x is hard; but with the residual formulation H(x) = F(x) + x, i.e. F(x) = H(x) - x, making F(x) = 0 makes H(x) the identity mapping.
In a traditional network, the input is x, the network output is F(x), the target to fit is H(x), and the training objective is F(x) = H(x).
In a residual network, the traditional output F(x) is combined with the input x, so the final output becomes F(x) + x, and the training objective is F(x) = H(x) - x.
Now suppose we are training a deep network that may be too deep. Assume there exists an ideal, best-performing network N; compared with N, our network must contain some redundant layers, and the training target of these redundant layers is the identity transformation, since only then can our network match N. For those redundant layers, the target to fit is H(x) = x. In a traditional network the output target would be F(x) = x, which is hard; in a residual network the target to fit becomes x - x = 0, i.e. the output target is F(x) = 0, which is much easier.
Why is the added term in F(x) + x exactly x and not some other value? Because the redundant layers' goal is the identity transformation, i.e. F(x) + x = x, so the training target for F(x) is 0, which is easy to reach. With any other value, say x/2, the training target for F(x) would be x/2, a non-zero value that is harder to realize than 0. Another of Kaiming He's papers [2] explores this question, experimentally comparing six variants of the residual structure and showing that adding the input x to F(x) works best.
In the residual structure shown above, a "shortcut connection" passes the input x directly to the output as an initial result, so the output is H(x) = F(x) + x; when F(x) = 0, H(x) = x, which is exactly the identity mapping mentioned above. ResNet therefore changes the learning target: instead of learning the full output, it learns the difference between the target H(x) and x, i.e. the residual F(x) = H(x) - x. The training objective is then to drive the residual toward 0, so that accuracy does not degrade as the network grows deeper.
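A toy numerical check of the identity-mapping argument, assuming a two-layer F with hypothetical weight matrices W1 and W2: when F(x) collapses to zero, the block output equals its input.

```python
import numpy as np

# Residual formulation y = F(x) + x with F(x) = W2 * relu(W1 * x).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = np.zeros((8, 8))   # if F collapses to zero ...
W2 = np.zeros((8, 8))

F = W2 @ np.maximum(W1 @ x, 0.0)
y = F + x               # shortcut connection

assert np.allclose(y, x)  # ... the block is an identity mapping
```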
The block has two layers, as in the expression below, where $\sigma$ denotes the ReLU nonlinearity:
$$\mathcal{F} = W_2\sigma(W_1x)$$
Then, through a shortcut connection and a second ReLU, the output y is obtained:
$${y}= \mathcal{F}({x}, {W_{i}}) + {x}.$$
F(x) and x are added element-wise; if their dimensions differ, a linear projection is applied to x to match the dimensions, as below:
$${y}= \mathcal{F}({x}, {W_{i}}) + W_s{x}.$$
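A matching sketch of the projection shortcut, with a hypothetical matrix Ws standing in for the 1 x 1 projection that aligns the dimensions:

```python
import numpy as np

# y = F(x, {W_i}) + Ws x, used when F(x) and x have different dimensions.
rng = np.random.default_rng(1)
x = rng.standard_normal(4)          # input with 4 channels
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((8, 8))
Ws = rng.standard_normal((8, 4))    # projects x from 4 to 8 dimensions

F = W2 @ np.maximum(W1 @ x, 0.0)    # F(x): two layers with ReLU in between
y = F + Ws @ x                      # element-wise add after projection
assert y.shape == (8,)
```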
Experiments show that a residual block usually needs two or more layers; a single-layer residual block $y = W_1x + x$ brings no improvement.
This skip structure breaks the convention that layer n's input can only come from layer n-1's output: one layer's output can skip several layers and feed a later layer directly. Its significance is that it opened a new direction for the problem of stacked networks whose error rate rises instead of falling (later followed by DenseNet). From then on, network depth could go beyond the previous limits to dozens, hundreds or even a thousand layers, making high-level semantic feature extraction and classification feasible.
Starting from VGG-19, the authors designed a plain network and ResNet-34, shown in the middle and on the right of the figure below.
The figure below shows the ResNet architectures used for ImageNet. The brackets give the parameters of a residual block, and multiple residual blocks are stacked; downsampling is performed by conv3_1, conv4_1 and conv5_1 with stride 2.
On the left are two 3 x 3 x 256 convolutions, with 3 x 3 x 256 x 256 x 2 = 1,179,648 parameters. On the right, a first 1 x 1 convolution reduces the 256 channels to 64, and a final 1 x 1 convolution restores them, for a total of 1 x 1 x 256 x 64 + 3 x 3 x 64 x 64 + 1 x 1 x 64 x 256 = 69,632 parameters, about 16.94 times fewer than on the left. The standard (left) block is used for ResNet-34 and shallower networks; for deeper networks (50/101/152 layers) the right-hand bottleneck block is used to reduce computation and parameter count.
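The parameter counts above are easy to verify (weights only, biases ignored):

```python
# Two 3x3x256 convolutions vs. the 1x1 -> 3x3 -> 1x1 bottleneck block.
basic = 3 * 3 * 256 * 256 * 2                  # two 3x3 convs at 256 channels
bottleneck = (1 * 1 * 256 * 64                 # 1x1 reduce 256 -> 64
              + 3 * 3 * 64 * 64                # 3x3 conv at 64 channels
              + 1 * 1 * 64 * 256)              # 1x1 restore 64 -> 256
print(basic, bottleneck, round(basic / bottleneck, 2))  # 1179648 69632 16.94
```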
def residual_block(x, out_channels, down_sample, projection=False):
Make sure your environment meets the following requirements:
To download NCCL, make sure you have registered an NVIDIA developer account.
Installing NCCL on Ubuntu requires first adding a repository containing the NCCL packages to the APT system, and then installing the NCCL packages through APT. Two repositories are available: a local repository and a network repository. The latter is recommended, so that upgrading is easy when a new version is released.
Install the repository.
For the local NCCL repository: sudo dpkg -i nccl-repo-<version>.deb
For the network repository: sudo dpkg -i nvidia-machine-learning-repo-<version>.deb
Update the APT database: sudo apt update
Install libnccl2 with APT. In addition, if you need to compile applications with NCCL, install the libnccl-dev package as well.
If you are using the network repository, use the following command.
sudo apt install libnccl2 libnccl-dev
If you want to keep an older version of CUDA, pin a specific version, for example:
sudo apt-get install libnccl2=2.0.0-1+cuda8.0 libnccl-dev=2.0.0-1+cuda8.0
See the download page for the exact package versions.
/usr/local
cd /usr/local
/usr/local/nccl-<version>/
sudo apt-get install build-essential
Download the opencv-2.4.9 package to your directory, then:
unzip opencv-2.4.9.zip
cd opencv-2.4.9
mkdir release
cd release
# Many cmake option sets circulate online; here CUDA and EIGEN are disabled to avoid later build errors. The default install prefix is /usr/local
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=OFF -D WITH_OPENMP=ON -D WITH_QT=ON -D WITH_EIGEN=OFF ..
sudo make -j4
cd opencv-2.4.9/samples/c
Download the matching CUDA version from https://developer.nvidia.com/cuda-toolkit-archive and select CUDA Toolkit 9.0 (Sept 2017) -> Windows -> x86_64 -> 10 -> exe (local).
After the download completes, run the installer and click Next through the steps; the default install path is on the C drive.
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\extras\CUPTI\libx64
nvcc -V
Run the command above to check whether the installation succeeded. Then go to https://developer.nvidia.com/rdp/cudnn-archive and download the matching 7.0 version.
Extract the archive and copy the files in its bin, include and lib folders into the corresponding subdirectories of C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0.
pip install tensorflow-gpu
wget http://www.open-mpi.org/software/ompi/v1.8/downloads/openmpi-1.8.0.tar.gz
tar zxvf openmpi-1.8.0.tar.gz
# Installs to /usr/local/lib by default; to use a custom prefix, add --prefix="/usr/local/openmpi"
sudo make
sudo gedit ~/.bashrc
cd openmpi-1.8.0/examples