While significant progress has been made in the field of Natural Language Processing (NLP), leading to commercially available products, Sign Language Recognition (SLR) is still in its infancy. The lack of large-scale sign language datasets makes it difficult to leverage modern Deep Learning methods. In this paper, we introduce LSFB-CONT, a large-scale dataset suited for continuous SLR, along with LSFB-ISOL, a subset of LSFB-CONT for isolated SLR. Baseline SLR experiments are conducted on LSFB-ISOL, and the resulting accuracy measures are compared with those obtained on previous datasets. The results suggest that state-of-the-art models for action recognition still lack sufficient internal representation power to capture the high degree of variation in a sign language.