
InternViT-6B-224px

[📂 GitHub] [🆕 Blog] [📜 InternVL 1.0 Paper] [📜 InternVL 1.5 Report]

[🗨️ Chat Demo] [🤗 HF Demo] [🚀 Quick Start] [📖 中文解读 (Explanation in Chinese)] [📖 Documents]

Model Details

  • Model Type: vision foundation model, feature backbone
  • Model Stats:
    • Params (M): 5903
    • Image size: 224 x 224
  • Pretrain Dataset: LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi
  • Note: This model has 48 blocks, and we found that the output after the fourth-to-last block works best for building a VLLM. Therefore, when building a VLLM with this model, please use the features from the fourth-to-last layer (a sketch follows the usage example below).

Linear Probing Performance

See this document for more details about the linear probing evaluation.

| IN-1K | IN-ReaL | IN-V2 | IN-A | IN-R | IN-Sketch |
|:-----:|:-------:|:-----:|:----:|:----:|:---------:|
| 88.2  | 90.4    | 79.9  | 77.5 | 89.8 | 69.1      |
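Linear probing trains a single linear classifier on top of frozen backbone features. Below is a minimal sketch of the idea, not the official evaluation code: the `features` and `labels` tensors are hypothetical placeholders standing in for cached InternViT embeddings, and the feature dimension of 3200 is an assumption about the backbone's hidden size.

import torch
import torch.nn as nn

# Hypothetical cached data: pooled embeddings from the frozen backbone
# and their ImageNet-1K labels (1000 classes).
features = torch.randn(1024, 3200)        # (num_samples, feature_dim); 3200 is assumed
labels = torch.randint(0, 1000, (1024,))  # (num_samples,)

# The probe is a single linear layer; the backbone itself is never updated.
probe = nn.Linear(features.shape[1], 1000)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for _ in range(10):  # a few passes over the cached features
    logits = probe(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()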

Model Usage (Image Embeddings)

import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the vision backbone in bfloat16 to reduce memory usage.
# trust_remote_code is required because the repo ships custom modeling code.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-224px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image = Image.open('./examples/image1.jpg').convert('RGB')

# The processor resizes and normalizes the image to 224 x 224.
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-224px')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Forward pass to obtain image embeddings; no gradients are needed at inference.
with torch.no_grad():
    outputs = model(pixel_values)
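The note in Model Details recommends the output of the fourth-to-last block when building a VLLM. Below is a minimal sketch of extracting those features, continuing from the snippet above and assuming the custom modeling code follows the standard transformers convention, where output_hidden_states=True returns a tuple of per-block outputs with the final block last:

# Request all intermediate hidden states (assumes the remote code supports
# the standard output_hidden_states argument).
with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

# hidden_states[-1] is the last block's output, so index -4 selects the
# output of the fourth-to-last block, as recommended in Model Details.
vllm_features = outputs.hidden_states[-4]
print(vllm_features.shape)  # (batch, num_tokens, hidden_dim)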

Citation

If you find this project useful in your research, please consider citing:

@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}